liblognorm-2.0.6/README:

Liblognorm is a fast, sample-based normalization library.
More information on liblognorm can be found at
http://www.liblognorm.com
Liblognorm has been evolving for several years and was initially meant to be used primarily with
the Mitre CEE effort. Consequently, the initial version of liblognorm (0.x)
uses the libee CEE support library in its API.
As time evolved, the initial CEE schema underwent considerable change. Even
worse, Mitre lost funding for CEE. While the CEE ideas survived as part
of the Red Hat-driven "Project Lumberjack", the data structures became greatly
simplified and JSON based. That effectively made libee obsolete (and also
in parts libestr, which was specifically written to support CEE's
initial requirement of embedded NUL chars in strings).
In 2013, Pavel Levshin converted liblognorm to native JSON, which helped
improve performance and simplicity for many client applications.
Unfortunately, this change broke interface compatibility (and there was
no way to avoid that, obviously...).
In 2015, most parts of liblognorm were redesigned and rewritten as part
of Rainer Gerhards' master thesis. For full technical details of how
liblognorm operates, and why it is so fast, please have a look at
https://www.researchgate.net/publication/310545144_Efficient_Normalization_of_IT_Log_Messages_under_Realtime_Conditions
The current library is the result of that effort. Application developers
are encouraged to switch to this version, as it provides the benefit of
a simpler API. This version is now being tracked by the git master branch.
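For illustration, here is a minimal sketch (not taken from this tarball) of how
an application might use the version 2 API to normalize a single message. The
rulebase file name and the sample message are hypothetical, and error handling
is kept deliberately short:

  /* Minimal liblognorm v2 usage sketch -- file name and message are made up. */
  #include <stdio.h>
  #include <string.h>
  #include <json-c/json.h>   /* header path may differ between distributions */
  #include <liblognorm.h>

  int main(void)
  {
      const char *msg = "user admin logged in";
      ln_ctx ctx = ln_initCtx();                  /* create a normalization context */
      if (ctx == NULL)
          return 1;
      if (ln_loadSamples(ctx, "rules.rulebase") != 0) {   /* load the rulebase */
          ln_exitCtx(ctx);
          return 1;
      }
      struct json_object *json = NULL;
      ln_normalize(ctx, msg, strlen(msg), &json); /* result is a json-c object */
      if (json != NULL) {
          printf("%s\n", json_object_to_json_string(json));
          json_object_put(json);                  /* the caller owns the JSON object */
      }
      ln_exitCtx(ctx);                            /* release the context */
      return 0;
  }

Such a program is typically built against the library via pkg-config; the
normalization result is a json-c object that the caller must release with
json_object_put().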
However, if you need to stick to the old API, there is a git branch
liblognorm0, which contains the previous version of the library. This
branch is also maintained for important bug fixes, so it is safe to use.
We recommend that packagers create packages for both liblognorm0 and
liblognorm1. Note that the two development packages cannot coexist on the
same system, because the pkg-config system would get into trouble.
Adiscon's own packages follow this scheme.
liblognorm-2.0.6/m4/lt~obsolete.m4:

# lt~obsolete.m4 -- aclocal satisfying obsolete definitions. -*-Autoconf-*-
#
# Copyright (C) 2004-2005, 2007, 2009, 2011-2015 Free Software
# Foundation, Inc.
# Written by Scott James Remnant, 2004.
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
# serial 5 lt~obsolete.m4
# These exist entirely to fool aclocal when bootstrapping libtool.
#
# In the past libtool.m4 has provided macros via AC_DEFUN (or AU_DEFUN),
# which have later been changed to m4_define as they aren't part of the
# exported API, or moved to Autoconf or Automake where they belong.
#
# The trouble is, aclocal is a bit thick. It'll see the old AC_DEFUN
# in /usr/share/aclocal/libtool.m4 and remember it, then when it sees us
# using a macro with the same name in our local m4/libtool.m4 it'll
# pull the old libtool.m4 in (it doesn't see our shiny new m4_define
# and doesn't know about Autoconf macros at all.)
#
# So we provide this file, which has a silly filename so it's always
# included after everything else. This provides aclocal with the
# AC_DEFUNs it wants, but when m4 processes it, it doesn't do anything
# because those macros already exist, or will be overwritten later.
# We use AC_DEFUN over AU_DEFUN for compatibility with aclocal-1.6.
#
# Anytime we withdraw an AC_DEFUN or AU_DEFUN, remember to add it here.
# Yes, that means every name once taken will need to remain here until
# we give up compatibility with versions before 1.7, at which point
# we need to keep only those names which we still refer to.
# This is to help aclocal find these macros, as it can't see m4_define.
AC_DEFUN([LTOBSOLETE_VERSION], [m4_if([1])])
m4_ifndef([AC_LIBTOOL_LINKER_OPTION], [AC_DEFUN([AC_LIBTOOL_LINKER_OPTION])])
m4_ifndef([AC_PROG_EGREP], [AC_DEFUN([AC_PROG_EGREP])])
m4_ifndef([_LT_AC_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_AC_PROG_ECHO_BACKSLASH])])
m4_ifndef([_LT_AC_SHELL_INIT], [AC_DEFUN([_LT_AC_SHELL_INIT])])
m4_ifndef([_LT_AC_SYS_LIBPATH_AIX], [AC_DEFUN([_LT_AC_SYS_LIBPATH_AIX])])
m4_ifndef([_LT_PROG_LTMAIN], [AC_DEFUN([_LT_PROG_LTMAIN])])
m4_ifndef([_LT_AC_TAGVAR], [AC_DEFUN([_LT_AC_TAGVAR])])
m4_ifndef([AC_LTDL_ENABLE_INSTALL], [AC_DEFUN([AC_LTDL_ENABLE_INSTALL])])
m4_ifndef([AC_LTDL_PREOPEN], [AC_DEFUN([AC_LTDL_PREOPEN])])
m4_ifndef([_LT_AC_SYS_COMPILER], [AC_DEFUN([_LT_AC_SYS_COMPILER])])
m4_ifndef([_LT_AC_LOCK], [AC_DEFUN([_LT_AC_LOCK])])
m4_ifndef([AC_LIBTOOL_SYS_OLD_ARCHIVE], [AC_DEFUN([AC_LIBTOOL_SYS_OLD_ARCHIVE])])
m4_ifndef([_LT_AC_TRY_DLOPEN_SELF], [AC_DEFUN([_LT_AC_TRY_DLOPEN_SELF])])
m4_ifndef([AC_LIBTOOL_PROG_CC_C_O], [AC_DEFUN([AC_LIBTOOL_PROG_CC_C_O])])
m4_ifndef([AC_LIBTOOL_SYS_HARD_LINK_LOCKS], [AC_DEFUN([AC_LIBTOOL_SYS_HARD_LINK_LOCKS])])
m4_ifndef([AC_LIBTOOL_OBJDIR], [AC_DEFUN([AC_LIBTOOL_OBJDIR])])
m4_ifndef([AC_LTDL_OBJDIR], [AC_DEFUN([AC_LTDL_OBJDIR])])
m4_ifndef([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH], [AC_DEFUN([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH])])
m4_ifndef([AC_LIBTOOL_SYS_LIB_STRIP], [AC_DEFUN([AC_LIBTOOL_SYS_LIB_STRIP])])
m4_ifndef([AC_PATH_MAGIC], [AC_DEFUN([AC_PATH_MAGIC])])
m4_ifndef([AC_PROG_LD_GNU], [AC_DEFUN([AC_PROG_LD_GNU])])
m4_ifndef([AC_PROG_LD_RELOAD_FLAG], [AC_DEFUN([AC_PROG_LD_RELOAD_FLAG])])
m4_ifndef([AC_DEPLIBS_CHECK_METHOD], [AC_DEFUN([AC_DEPLIBS_CHECK_METHOD])])
m4_ifndef([AC_LIBTOOL_PROG_COMPILER_NO_RTTI], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_NO_RTTI])])
m4_ifndef([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE], [AC_DEFUN([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE])])
m4_ifndef([AC_LIBTOOL_PROG_COMPILER_PIC], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_PIC])])
m4_ifndef([AC_LIBTOOL_PROG_LD_SHLIBS], [AC_DEFUN([AC_LIBTOOL_PROG_LD_SHLIBS])])
m4_ifndef([AC_LIBTOOL_POSTDEP_PREDEP], [AC_DEFUN([AC_LIBTOOL_POSTDEP_PREDEP])])
m4_ifndef([LT_AC_PROG_EGREP], [AC_DEFUN([LT_AC_PROG_EGREP])])
m4_ifndef([LT_AC_PROG_SED], [AC_DEFUN([LT_AC_PROG_SED])])
m4_ifndef([_LT_CC_BASENAME], [AC_DEFUN([_LT_CC_BASENAME])])
m4_ifndef([_LT_COMPILER_BOILERPLATE], [AC_DEFUN([_LT_COMPILER_BOILERPLATE])])
m4_ifndef([_LT_LINKER_BOILERPLATE], [AC_DEFUN([_LT_LINKER_BOILERPLATE])])
m4_ifndef([_AC_PROG_LIBTOOL], [AC_DEFUN([_AC_PROG_LIBTOOL])])
m4_ifndef([AC_LIBTOOL_SETUP], [AC_DEFUN([AC_LIBTOOL_SETUP])])
m4_ifndef([_LT_AC_CHECK_DLFCN], [AC_DEFUN([_LT_AC_CHECK_DLFCN])])
m4_ifndef([AC_LIBTOOL_SYS_DYNAMIC_LINKER], [AC_DEFUN([AC_LIBTOOL_SYS_DYNAMIC_LINKER])])
m4_ifndef([_LT_AC_TAGCONFIG], [AC_DEFUN([_LT_AC_TAGCONFIG])])
m4_ifndef([AC_DISABLE_FAST_INSTALL], [AC_DEFUN([AC_DISABLE_FAST_INSTALL])])
m4_ifndef([_LT_AC_LANG_CXX], [AC_DEFUN([_LT_AC_LANG_CXX])])
m4_ifndef([_LT_AC_LANG_F77], [AC_DEFUN([_LT_AC_LANG_F77])])
m4_ifndef([_LT_AC_LANG_GCJ], [AC_DEFUN([_LT_AC_LANG_GCJ])])
m4_ifndef([AC_LIBTOOL_LANG_C_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_C_CONFIG])])
m4_ifndef([_LT_AC_LANG_C_CONFIG], [AC_DEFUN([_LT_AC_LANG_C_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_CXX_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_CXX_CONFIG])])
m4_ifndef([_LT_AC_LANG_CXX_CONFIG], [AC_DEFUN([_LT_AC_LANG_CXX_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_F77_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_F77_CONFIG])])
m4_ifndef([_LT_AC_LANG_F77_CONFIG], [AC_DEFUN([_LT_AC_LANG_F77_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_GCJ_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_GCJ_CONFIG])])
m4_ifndef([_LT_AC_LANG_GCJ_CONFIG], [AC_DEFUN([_LT_AC_LANG_GCJ_CONFIG])])
m4_ifndef([AC_LIBTOOL_LANG_RC_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_RC_CONFIG])])
m4_ifndef([_LT_AC_LANG_RC_CONFIG], [AC_DEFUN([_LT_AC_LANG_RC_CONFIG])])
m4_ifndef([AC_LIBTOOL_CONFIG], [AC_DEFUN([AC_LIBTOOL_CONFIG])])
m4_ifndef([_LT_AC_FILE_LTDLL_C], [AC_DEFUN([_LT_AC_FILE_LTDLL_C])])
m4_ifndef([_LT_REQUIRED_DARWIN_CHECKS], [AC_DEFUN([_LT_REQUIRED_DARWIN_CHECKS])])
m4_ifndef([_LT_AC_PROG_CXXCPP], [AC_DEFUN([_LT_AC_PROG_CXXCPP])])
m4_ifndef([_LT_PREPARE_SED_QUOTE_VARS], [AC_DEFUN([_LT_PREPARE_SED_QUOTE_VARS])])
m4_ifndef([_LT_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_PROG_ECHO_BACKSLASH])])
m4_ifndef([_LT_PROG_F77], [AC_DEFUN([_LT_PROG_F77])])
m4_ifndef([_LT_PROG_FC], [AC_DEFUN([_LT_PROG_FC])])
m4_ifndef([_LT_PROG_CXX], [AC_DEFUN([_LT_PROG_CXX])])
liblognorm-2.0.6/m4/ltsugar.m4:

# ltsugar.m4 -- libtool m4 base layer. -*-Autoconf-*-
#
# Copyright (C) 2004-2005, 2007-2008, 2011-2015 Free Software
# Foundation, Inc.
# Written by Gary V. Vaughan, 2004
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
# serial 6 ltsugar.m4
# This is to help aclocal find these macros, as it can't see m4_define.
AC_DEFUN([LTSUGAR_VERSION], [m4_if([0.1])])
# lt_join(SEP, ARG1, [ARG2...])
# -----------------------------
# Produce ARG1SEPARG2...SEPARGn, omitting [] arguments and their
# associated separator.
# Needed until we can rely on m4_join from Autoconf 2.62, since all earlier
# versions in m4sugar had bugs.
m4_define([lt_join],
[m4_if([$#], [1], [],
[$#], [2], [[$2]],
[m4_if([$2], [], [], [[$2]_])$0([$1], m4_shift(m4_shift($@)))])])
m4_define([_lt_join],
[m4_if([$#$2], [2], [],
[m4_if([$2], [], [], [[$1$2]])$0([$1], m4_shift(m4_shift($@)))])])
# lt_car(LIST)
# lt_cdr(LIST)
# ------------
# Manipulate m4 lists.
# These macros are necessary as long as we still need to support
# Autoconf-2.59, which quotes differently.
m4_define([lt_car], [[$1]])
m4_define([lt_cdr],
[m4_if([$#], 0, [m4_fatal([$0: cannot be called without arguments])],
[$#], 1, [],
[m4_dquote(m4_shift($@))])])
m4_define([lt_unquote], $1)
# lt_append(MACRO-NAME, STRING, [SEPARATOR])
# ------------------------------------------
# Redefine MACRO-NAME to hold its former content plus 'SEPARATOR''STRING'.
# Note that neither SEPARATOR nor STRING are expanded; they are appended
# to MACRO-NAME as is (leaving the expansion for when MACRO-NAME is invoked).
# No SEPARATOR is output if MACRO-NAME was previously undefined (as opposed
# to being defined but empty).
#
# This macro is needed until we can rely on Autoconf 2.62, since earlier
# versions of m4sugar mistakenly expanded SEPARATOR but not STRING.
m4_define([lt_append],
[m4_define([$1],
m4_ifdef([$1], [m4_defn([$1])[$3]])[$2])])
# lt_combine(SEP, PREFIX-LIST, INFIX, SUFFIX1, [SUFFIX2...])
# ----------------------------------------------------------
# Produce a SEP delimited list of all paired combinations of elements of
# PREFIX-LIST with SUFFIX1 through SUFFIXn. Each element of the list
# has the form PREFIXmINFIXSUFFIXn.
# Needed until we can rely on m4_combine added in Autoconf 2.62.
m4_define([lt_combine],
[m4_if(m4_eval([$# > 3]), [1],
[m4_pushdef([_Lt_sep], [m4_define([_Lt_sep], m4_defn([lt_car]))])]]dnl
[[m4_foreach([_Lt_prefix], [$2],
[m4_foreach([_Lt_suffix],
]m4_dquote(m4_dquote(m4_shift(m4_shift(m4_shift($@)))))[,
[_Lt_sep([$1])[]m4_defn([_Lt_prefix])[$3]m4_defn([_Lt_suffix])])])])])
# lt_if_append_uniq(MACRO-NAME, VARNAME, [SEPARATOR], [UNIQ], [NOT-UNIQ])
# -----------------------------------------------------------------------
# Iff MACRO-NAME does not yet contain VARNAME, then append it (delimited
# by SEPARATOR if supplied) and expand UNIQ, else NOT-UNIQ.
m4_define([lt_if_append_uniq],
[m4_ifdef([$1],
[m4_if(m4_index([$3]m4_defn([$1])[$3], [$3$2$3]), [-1],
[lt_append([$1], [$2], [$3])$4],
[$5])],
[lt_append([$1], [$2], [$3])$4])])
# lt_dict_add(DICT, KEY, VALUE)
# -----------------------------
m4_define([lt_dict_add],
[m4_define([$1($2)], [$3])])
# lt_dict_add_subkey(DICT, KEY, SUBKEY, VALUE)
# --------------------------------------------
m4_define([lt_dict_add_subkey],
[m4_define([$1($2:$3)], [$4])])
# lt_dict_fetch(DICT, KEY, [SUBKEY])
# ----------------------------------
m4_define([lt_dict_fetch],
[m4_ifval([$3],
m4_ifdef([$1($2:$3)], [m4_defn([$1($2:$3)])]),
m4_ifdef([$1($2)], [m4_defn([$1($2)])]))])
# lt_if_dict_fetch(DICT, KEY, [SUBKEY], VALUE, IF-TRUE, [IF-FALSE])
# -----------------------------------------------------------------
m4_define([lt_if_dict_fetch],
[m4_if(lt_dict_fetch([$1], [$2], [$3]), [$4],
[$5],
[$6])])
# lt_dict_filter(DICT, [SUBKEY], VALUE, [SEPARATOR], KEY, [...])
# --------------------------------------------------------------
m4_define([lt_dict_filter],
[m4_if([$5], [], [],
[lt_join(m4_quote(m4_default([$4], [[, ]])),
lt_unquote(m4_split(m4_normalize(m4_foreach(_Lt_key, lt_car([m4_shiftn(4, $@)]),
[lt_if_dict_fetch([$1], _Lt_key, [$2], [$3], [_Lt_key ])])))))])[]dnl
])
liblognorm-2.0.6/m4/libtool.m4:

# libtool.m4 - Configure libtool for the host system. -*-Autoconf-*-
#
# Copyright (C) 1996-2001, 2003-2015 Free Software Foundation, Inc.
# Written by Gordon Matzigkeit, 1996
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
m4_define([_LT_COPYING], [dnl
# Copyright (C) 2014 Free Software Foundation, Inc.
# This is free software; see the source for copying conditions. There is NO
# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# GNU Libtool is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program or library that is built
# using GNU Libtool, you may include this file under the same
# distribution terms that you use for the rest of that program.
#
# GNU Libtool is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
])
# serial 58 LT_INIT
# LT_PREREQ(VERSION)
# ------------------
# Complain and exit if this libtool version is less than VERSION.
m4_defun([LT_PREREQ],
[m4_if(m4_version_compare(m4_defn([LT_PACKAGE_VERSION]), [$1]), -1,
[m4_default([$3],
[m4_fatal([Libtool version $1 or higher is required],
63)])],
[$2])])
# _LT_CHECK_BUILDDIR
# ------------------
# Complain if the absolute build directory name contains unusual characters
m4_defun([_LT_CHECK_BUILDDIR],
[case `pwd` in
*\ * | *\	*)
AC_MSG_WARN([Libtool does not cope well with whitespace in `pwd`]) ;;
esac
])
# LT_INIT([OPTIONS])
# ------------------
AC_DEFUN([LT_INIT],
[AC_PREREQ([2.62])dnl We use AC_PATH_PROGS_FEATURE_CHECK
AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl
AC_BEFORE([$0], [LT_LANG])dnl
AC_BEFORE([$0], [LT_OUTPUT])dnl
AC_BEFORE([$0], [LTDL_INIT])dnl
m4_require([_LT_CHECK_BUILDDIR])dnl
dnl Autoconf doesn't catch unexpanded LT_ macros by default:
m4_pattern_forbid([^_?LT_[A-Z_]+$])dnl
m4_pattern_allow([^(_LT_EOF|LT_DLGLOBAL|LT_DLLAZY_OR_NOW|LT_MULTI_MODULE)$])dnl
dnl aclocal doesn't pull ltoptions.m4, ltsugar.m4, or ltversion.m4
dnl unless we require an AC_DEFUNed macro:
AC_REQUIRE([LTOPTIONS_VERSION])dnl
AC_REQUIRE([LTSUGAR_VERSION])dnl
AC_REQUIRE([LTVERSION_VERSION])dnl
AC_REQUIRE([LTOBSOLETE_VERSION])dnl
m4_require([_LT_PROG_LTMAIN])dnl
_LT_SHELL_INIT([SHELL=${CONFIG_SHELL-/bin/sh}])
dnl Parse OPTIONS
_LT_SET_OPTIONS([$0], [$1])
# This can be used to rebuild libtool when needed
LIBTOOL_DEPS=$ltmain
# Always use our own libtool.
LIBTOOL='$(SHELL) $(top_builddir)/libtool'
AC_SUBST(LIBTOOL)dnl
_LT_SETUP
# Only expand once:
m4_define([LT_INIT])
])# LT_INIT
# Old names:
AU_ALIAS([AC_PROG_LIBTOOL], [LT_INIT])
AU_ALIAS([AM_PROG_LIBTOOL], [LT_INIT])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_PROG_LIBTOOL], [])
dnl AC_DEFUN([AM_PROG_LIBTOOL], [])
# _LT_PREPARE_CC_BASENAME
# -----------------------
m4_defun([_LT_PREPARE_CC_BASENAME], [
# Calculate cc_basename. Skip known compiler wrappers and cross-prefix.
func_cc_basename ()
{
for cc_temp in @S|@*""; do
case $cc_temp in
compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;;
distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;;
\-*) ;;
*) break;;
esac
done
func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"`
}
])# _LT_PREPARE_CC_BASENAME
# _LT_CC_BASENAME(CC)
# -------------------
# It would be clearer to call AC_REQUIREs from _LT_PREPARE_CC_BASENAME,
# but that macro is also expanded into generated libtool script, which
# arranges for $SED and $ECHO to be set by different means.
m4_defun([_LT_CC_BASENAME],
[m4_require([_LT_PREPARE_CC_BASENAME])dnl
AC_REQUIRE([_LT_DECL_SED])dnl
AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl
func_cc_basename $1
cc_basename=$func_cc_basename_result
])
# _LT_FILEUTILS_DEFAULTS
# ----------------------
# It is okay to use these file commands and assume they have been set
# sensibly after 'm4_require([_LT_FILEUTILS_DEFAULTS])'.
m4_defun([_LT_FILEUTILS_DEFAULTS],
[: ${CP="cp -f"}
: ${MV="mv -f"}
: ${RM="rm -f"}
])# _LT_FILEUTILS_DEFAULTS
# _LT_SETUP
# ---------
m4_defun([_LT_SETUP],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
AC_REQUIRE([_LT_PREPARE_SED_QUOTE_VARS])dnl
AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl
_LT_DECL([], [PATH_SEPARATOR], [1], [The PATH separator for the build system])dnl
dnl
_LT_DECL([], [host_alias], [0], [The host system])dnl
_LT_DECL([], [host], [0])dnl
_LT_DECL([], [host_os], [0])dnl
dnl
_LT_DECL([], [build_alias], [0], [The build system])dnl
_LT_DECL([], [build], [0])dnl
_LT_DECL([], [build_os], [0])dnl
dnl
AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([LT_PATH_LD])dnl
AC_REQUIRE([LT_PATH_NM])dnl
dnl
AC_REQUIRE([AC_PROG_LN_S])dnl
test -z "$LN_S" && LN_S="ln -s"
_LT_DECL([], [LN_S], [1], [Whether we need soft or hard links])dnl
dnl
AC_REQUIRE([LT_CMD_MAX_LEN])dnl
_LT_DECL([objext], [ac_objext], [0], [Object file suffix (normally "o")])dnl
_LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl
dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_CHECK_SHELL_FEATURES])dnl
m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl
m4_require([_LT_CMD_RELOAD])dnl
m4_require([_LT_CHECK_MAGIC_METHOD])dnl
m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl
m4_require([_LT_CMD_OLD_ARCHIVE])dnl
m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
m4_require([_LT_WITH_SYSROOT])dnl
m4_require([_LT_CMD_TRUNCATE])dnl
_LT_CONFIG_LIBTOOL_INIT([
# See if we are running on zsh, and set the options that allow our
# commands through without removal of \ escapes INIT.
if test -n "\${ZSH_VERSION+set}"; then
setopt NO_GLOB_SUBST
fi
])
if test -n "${ZSH_VERSION+set}"; then
setopt NO_GLOB_SUBST
fi
_LT_CHECK_OBJDIR
m4_require([_LT_TAG_COMPILER])dnl
case $host_os in
aix3*)
# AIX sometimes has problems with the GCC collect2 program. For some
# reason, if we set the COLLECT_NAMES environment variable, the problems
# vanish in a puff of smoke.
if test set != "${COLLECT_NAMES+set}"; then
COLLECT_NAMES=
export COLLECT_NAMES
fi
;;
esac
# Global variables:
ofile=libtool
can_build_shared=yes
# All known linkers require a '.a' archive for static linking (except MSVC,
# which needs '.lib').
libext=a
with_gnu_ld=$lt_cv_prog_gnu_ld
old_CC=$CC
old_CFLAGS=$CFLAGS
# Set sane defaults for various variables
test -z "$CC" && CC=cc
test -z "$LTCC" && LTCC=$CC
test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS
test -z "$LD" && LD=ld
test -z "$ac_objext" && ac_objext=o
_LT_CC_BASENAME([$compiler])
# Only perform the check for file, if the check method requires it
test -z "$MAGIC_CMD" && MAGIC_CMD=file
case $deplibs_check_method in
file_magic*)
if test "$file_magic_cmd" = '$MAGIC_CMD'; then
_LT_PATH_MAGIC
fi
;;
esac
# Use C for the default configuration in the libtool script
LT_SUPPORTED_TAG([CC])
_LT_LANG_C_CONFIG
_LT_LANG_DEFAULT_CONFIG
_LT_CONFIG_COMMANDS
])# _LT_SETUP
# _LT_PREPARE_SED_QUOTE_VARS
# --------------------------
# Define a few sed substitution that help us do robust quoting.
m4_defun([_LT_PREPARE_SED_QUOTE_VARS],
[# Backslashify metacharacters that are still active within
# double-quoted strings.
sed_quote_subst='s/\([["`$\\]]\)/\\\1/g'
# Same as above, but do not quote variable references.
double_quote_subst='s/\([["`\\]]\)/\\\1/g'
# Sed substitution to delay expansion of an escaped shell variable in a
# double_quote_subst'ed string.
delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g'
# Sed substitution to delay expansion of an escaped single quote.
delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g'
# Sed substitution to avoid accidental globbing in evaled expressions
no_glob_subst='s/\*/\\\*/g'
])
# _LT_PROG_LTMAIN
# ---------------
# Note that this code is called both from 'configure', and 'config.status'
# now that we use AC_CONFIG_COMMANDS to generate libtool. Notably,
# 'config.status' has no value for ac_aux_dir unless we are using Automake,
# so we pass a copy along to make sure it has a sensible value anyway.
m4_defun([_LT_PROG_LTMAIN],
[m4_ifdef([AC_REQUIRE_AUX_FILE], [AC_REQUIRE_AUX_FILE([ltmain.sh])])dnl
_LT_CONFIG_LIBTOOL_INIT([ac_aux_dir='$ac_aux_dir'])
ltmain=$ac_aux_dir/ltmain.sh
])# _LT_PROG_LTMAIN
## ------------------------------------- ##
## Accumulate code for creating libtool. ##
## ------------------------------------- ##
# So that we can recreate a full libtool script including additional
# tags, we accumulate the chunks of code to send to AC_CONFIG_COMMANDS
# in macros and then make a single call at the end using the 'libtool'
# label.
# _LT_CONFIG_LIBTOOL_INIT([INIT-COMMANDS])
# ----------------------------------------
# Register INIT-COMMANDS to be passed to AC_CONFIG_COMMANDS later.
m4_define([_LT_CONFIG_LIBTOOL_INIT],
[m4_ifval([$1],
[m4_append([_LT_OUTPUT_LIBTOOL_INIT],
[$1
])])])
# Initialize.
m4_define([_LT_OUTPUT_LIBTOOL_INIT])
# _LT_CONFIG_LIBTOOL([COMMANDS])
# ------------------------------
# Register COMMANDS to be passed to AC_CONFIG_COMMANDS later.
m4_define([_LT_CONFIG_LIBTOOL],
[m4_ifval([$1],
[m4_append([_LT_OUTPUT_LIBTOOL_COMMANDS],
[$1
])])])
# Initialize.
m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS])
# _LT_CONFIG_SAVE_COMMANDS([COMMANDS], [INIT_COMMANDS])
# -----------------------------------------------------
m4_defun([_LT_CONFIG_SAVE_COMMANDS],
[_LT_CONFIG_LIBTOOL([$1])
_LT_CONFIG_LIBTOOL_INIT([$2])
])
# _LT_FORMAT_COMMENT([COMMENT])
# -----------------------------
# Add leading comment marks to the start of each line, and a trailing
# full-stop to the whole comment if one is not present already.
m4_define([_LT_FORMAT_COMMENT],
[m4_ifval([$1], [
m4_bpatsubst([m4_bpatsubst([$1], [^ *], [# ])],
[['`$\]], [\\\&])]m4_bmatch([$1], [[!?.]$], [], [.])
)])
## ------------------------ ##
## FIXME: Eliminate VARNAME ##
## ------------------------ ##
# _LT_DECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION], [IS-TAGGED?])
# -------------------------------------------------------------------
# CONFIGNAME is the name given to the value in the libtool script.
# VARNAME is the (base) name used in the configure script.
# VALUE may be 0, 1 or 2 for a computed quote escaped value based on
# VARNAME. Any other value will be used directly.
m4_define([_LT_DECL],
[lt_if_append_uniq([lt_decl_varnames], [$2], [, ],
[lt_dict_add_subkey([lt_decl_dict], [$2], [libtool_name],
[m4_ifval([$1], [$1], [$2])])
lt_dict_add_subkey([lt_decl_dict], [$2], [value], [$3])
m4_ifval([$4],
[lt_dict_add_subkey([lt_decl_dict], [$2], [description], [$4])])
lt_dict_add_subkey([lt_decl_dict], [$2],
[tagged?], [m4_ifval([$5], [yes], [no])])])
])
# _LT_TAGDECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION])
# --------------------------------------------------------
m4_define([_LT_TAGDECL], [_LT_DECL([$1], [$2], [$3], [$4], [yes])])
# lt_decl_tag_varnames([SEPARATOR], [VARNAME1...])
# ------------------------------------------------
m4_define([lt_decl_tag_varnames],
[_lt_decl_filter([tagged?], [yes], $@)])
# _lt_decl_filter(SUBKEY, VALUE, [SEPARATOR], [VARNAME1..])
# ---------------------------------------------------------
m4_define([_lt_decl_filter],
[m4_case([$#],
[0], [m4_fatal([$0: too few arguments: $#])],
[1], [m4_fatal([$0: too few arguments: $#: $1])],
[2], [lt_dict_filter([lt_decl_dict], [$1], [$2], [], lt_decl_varnames)],
[3], [lt_dict_filter([lt_decl_dict], [$1], [$2], [$3], lt_decl_varnames)],
[lt_dict_filter([lt_decl_dict], $@)])[]dnl
])
# lt_decl_quote_varnames([SEPARATOR], [VARNAME1...])
# --------------------------------------------------
m4_define([lt_decl_quote_varnames],
[_lt_decl_filter([value], [1], $@)])
# lt_decl_dquote_varnames([SEPARATOR], [VARNAME1...])
# ---------------------------------------------------
m4_define([lt_decl_dquote_varnames],
[_lt_decl_filter([value], [2], $@)])
# lt_decl_varnames_tagged([SEPARATOR], [VARNAME1...])
# ---------------------------------------------------
m4_define([lt_decl_varnames_tagged],
[m4_assert([$# <= 2])dnl
_$0(m4_quote(m4_default([$1], [[, ]])),
m4_ifval([$2], [[$2]], [m4_dquote(lt_decl_tag_varnames)]),
m4_split(m4_normalize(m4_quote(_LT_TAGS)), [ ]))])
m4_define([_lt_decl_varnames_tagged],
[m4_ifval([$3], [lt_combine([$1], [$2], [_], $3)])])
# lt_decl_all_varnames([SEPARATOR], [VARNAME1...])
# ------------------------------------------------
m4_define([lt_decl_all_varnames],
[_$0(m4_quote(m4_default([$1], [[, ]])),
m4_if([$2], [],
m4_quote(lt_decl_varnames),
m4_quote(m4_shift($@))))[]dnl
])
m4_define([_lt_decl_all_varnames],
[lt_join($@, lt_decl_varnames_tagged([$1],
lt_decl_tag_varnames([[, ]], m4_shift($@))))dnl
])
# _LT_CONFIG_STATUS_DECLARE([VARNAME])
# ------------------------------------
# Quote a variable value, and forward it to 'config.status' so that its
# declaration there will have the same value as in 'configure'. VARNAME
# must have a single quote delimited value for this to work.
m4_define([_LT_CONFIG_STATUS_DECLARE],
[$1='`$ECHO "$][$1" | $SED "$delay_single_quote_subst"`'])
# _LT_CONFIG_STATUS_DECLARATIONS
# ------------------------------
# We delimit libtool config variables with single quotes, so when
# we write them to config.status, we have to be sure to quote all
# embedded single quotes properly. In configure, this macro expands
# each variable declared with _LT_DECL (and _LT_TAGDECL) into:
#
#    <var>='`$ECHO "$<var>" | $SED "$delay_single_quote_subst"`'
m4_defun([_LT_CONFIG_STATUS_DECLARATIONS],
[m4_foreach([_lt_var], m4_quote(lt_decl_all_varnames),
[m4_n([_LT_CONFIG_STATUS_DECLARE(_lt_var)])])])
# _LT_LIBTOOL_TAGS
# ----------------
# Output comment and list of tags supported by the script
m4_defun([_LT_LIBTOOL_TAGS],
[_LT_FORMAT_COMMENT([The names of the tagged configurations supported by this script])dnl
available_tags='_LT_TAGS'dnl
])
# _LT_LIBTOOL_DECLARE(VARNAME, [TAG])
# -----------------------------------
# Extract the dictionary values for VARNAME (optionally with TAG) and
# expand to a commented shell variable setting:
#
# # Some comment about what VAR is for.
# visible_name=$lt_internal_name
m4_define([_LT_LIBTOOL_DECLARE],
[_LT_FORMAT_COMMENT(m4_quote(lt_dict_fetch([lt_decl_dict], [$1],
[description])))[]dnl
m4_pushdef([_libtool_name],
m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [libtool_name])))[]dnl
m4_case(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [value])),
[0], [_libtool_name=[$]$1],
[1], [_libtool_name=$lt_[]$1],
[2], [_libtool_name=$lt_[]$1],
[_libtool_name=lt_dict_fetch([lt_decl_dict], [$1], [value])])[]dnl
m4_ifval([$2], [_$2])[]m4_popdef([_libtool_name])[]dnl
])
# _LT_LIBTOOL_CONFIG_VARS
# -----------------------
# Produce commented declarations of non-tagged libtool config variables
# suitable for insertion in the LIBTOOL CONFIG section of the 'libtool'
# script. Tagged libtool config variables (even for the LIBTOOL CONFIG
# section) are produced by _LT_LIBTOOL_TAG_VARS.
m4_defun([_LT_LIBTOOL_CONFIG_VARS],
[m4_foreach([_lt_var],
m4_quote(_lt_decl_filter([tagged?], [no], [], lt_decl_varnames)),
[m4_n([_LT_LIBTOOL_DECLARE(_lt_var)])])])
# _LT_LIBTOOL_TAG_VARS(TAG)
# -------------------------
m4_define([_LT_LIBTOOL_TAG_VARS],
[m4_foreach([_lt_var], m4_quote(lt_decl_tag_varnames),
[m4_n([_LT_LIBTOOL_DECLARE(_lt_var, [$1])])])])
# _LT_TAGVAR(VARNAME, [TAGNAME])
# ------------------------------
m4_define([_LT_TAGVAR], [m4_ifval([$2], [$1_$2], [$1])])
# _LT_CONFIG_COMMANDS
# -------------------
# Send accumulated output to $CONFIG_STATUS. Thanks to the lists of
# variables for single and double quote escaping we saved from calls
# to _LT_DECL, we can put quote escaped variables declarations
# into 'config.status', and then the shell code to quote escape them in
# for loops in 'config.status'. Finally, any additional code accumulated
# from calls to _LT_CONFIG_LIBTOOL_INIT is expanded.
m4_defun([_LT_CONFIG_COMMANDS],
[AC_PROVIDE_IFELSE([LT_OUTPUT],
dnl If the libtool generation code has been placed in $CONFIG_LT,
dnl instead of duplicating it all over again into config.status,
dnl then we will have config.status run $CONFIG_LT later, so it
dnl needs to know what name is stored there:
[AC_CONFIG_COMMANDS([libtool],
[$SHELL $CONFIG_LT || AS_EXIT(1)], [CONFIG_LT='$CONFIG_LT'])],
dnl If the libtool generation code is destined for config.status,
dnl expand the accumulated commands and init code now:
[AC_CONFIG_COMMANDS([libtool],
[_LT_OUTPUT_LIBTOOL_COMMANDS], [_LT_OUTPUT_LIBTOOL_COMMANDS_INIT])])
])#_LT_CONFIG_COMMANDS
# Initialize.
m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS_INIT],
[
# The HP-UX ksh and POSIX shell print the target directory to stdout
# if CDPATH is set.
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH
sed_quote_subst='$sed_quote_subst'
double_quote_subst='$double_quote_subst'
delay_variable_subst='$delay_variable_subst'
_LT_CONFIG_STATUS_DECLARATIONS
LTCC='$LTCC'
LTCFLAGS='$LTCFLAGS'
compiler='$compiler_DEFAULT'
# A function that is used when there is no print builtin or printf.
func_fallback_echo ()
{
eval 'cat <<_LTECHO_EOF
\$[]1
_LTECHO_EOF'
}
# Quote evaled strings.
for var in lt_decl_all_varnames([[ \
]], lt_decl_quote_varnames); do
case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
*[[\\\\\\\`\\"\\\$]]*)
eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes
;;
*)
eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\""
;;
esac
done
# Double-quote double-evaled strings.
for var in lt_decl_all_varnames([[ \
]], lt_decl_dquote_varnames); do
case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
*[[\\\\\\\`\\"\\\$]]*)
eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes
;;
*)
eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\""
;;
esac
done
_LT_OUTPUT_LIBTOOL_INIT
])
# _LT_GENERATED_FILE_INIT(FILE, [COMMENT])
# ------------------------------------
# Generate a child script FILE with all initialization necessary to
# reuse the environment learned by the parent script, and make the
# file executable. If COMMENT is supplied, it is inserted after the
# '#!' sequence but before initialization text begins. After this
# macro, additional text can be appended to FILE to form the body of
# the child script. The macro ends with non-zero status if the
# file could not be fully written (such as if the disk is full).
m4_ifdef([AS_INIT_GENERATED],
[m4_defun([_LT_GENERATED_FILE_INIT],[AS_INIT_GENERATED($@)])],
[m4_defun([_LT_GENERATED_FILE_INIT],
[m4_require([AS_PREPARE])]dnl
[m4_pushdef([AS_MESSAGE_LOG_FD])]dnl
[lt_write_fail=0
cat >$1 <<_ASEOF || lt_write_fail=1
#! $SHELL
# Generated by $as_me.
$2
SHELL=\${CONFIG_SHELL-$SHELL}
export SHELL
_ASEOF
cat >>$1 <<\_ASEOF || lt_write_fail=1
AS_SHELL_SANITIZE
_AS_PREPARE
exec AS_MESSAGE_FD>&1
_ASEOF
test 0 = "$lt_write_fail" && chmod +x $1[]dnl
m4_popdef([AS_MESSAGE_LOG_FD])])])# _LT_GENERATED_FILE_INIT
# LT_OUTPUT
# ---------
# This macro allows early generation of the libtool script (before
# AC_OUTPUT is called), in case it is used in configure for compilation
# tests.
AC_DEFUN([LT_OUTPUT],
[: ${CONFIG_LT=./config.lt}
AC_MSG_NOTICE([creating $CONFIG_LT])
_LT_GENERATED_FILE_INIT(["$CONFIG_LT"],
[# Run this file to recreate a libtool stub with the current configuration.])
cat >>"$CONFIG_LT" <<\_LTEOF
lt_cl_silent=false
exec AS_MESSAGE_LOG_FD>>config.log
{
echo
AS_BOX([Running $as_me.])
} >&AS_MESSAGE_LOG_FD
lt_cl_help="\
'$as_me' creates a local libtool stub from the current configuration,
for use in further configure time tests before the real libtool is
generated.
Usage: $[0] [[OPTIONS]]
-h, --help print this help, then exit
-V, --version print version number, then exit
-q, --quiet do not print progress messages
-d, --debug don't remove temporary files
Report bugs to <bug-libtool@gnu.org>."
lt_cl_version="\
m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl
m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION])
configured by $[0], generated by m4_PACKAGE_STRING.
Copyright (C) 2011 Free Software Foundation, Inc.
This config.lt script is free software; the Free Software Foundation
gives unlimited permission to copy, distribute and modify it."
while test 0 != $[#]
do
case $[1] in
--version | --v* | -V )
echo "$lt_cl_version"; exit 0 ;;
--help | --h* | -h )
echo "$lt_cl_help"; exit 0 ;;
--debug | --d* | -d )
debug=: ;;
--quiet | --q* | --silent | --s* | -q )
lt_cl_silent=: ;;
-*) AC_MSG_ERROR([unrecognized option: $[1]
Try '$[0] --help' for more information.]) ;;
*) AC_MSG_ERROR([unrecognized argument: $[1]
Try '$[0] --help' for more information.]) ;;
esac
shift
done
if $lt_cl_silent; then
exec AS_MESSAGE_FD>/dev/null
fi
_LTEOF
cat >>"$CONFIG_LT" <<_LTEOF
_LT_OUTPUT_LIBTOOL_COMMANDS_INIT
_LTEOF
cat >>"$CONFIG_LT" <<\_LTEOF
AC_MSG_NOTICE([creating $ofile])
_LT_OUTPUT_LIBTOOL_COMMANDS
AS_EXIT(0)
_LTEOF
chmod +x "$CONFIG_LT"
# configure is writing to config.log, but config.lt does its own redirection,
# appending to config.log, which fails on DOS, as config.log is still kept
# open by configure. Here we exec the FD to /dev/null, effectively closing
# config.log, so it can be properly (re)opened and appended to by config.lt.
lt_cl_success=:
test yes = "$silent" &&
lt_config_lt_args="$lt_config_lt_args --quiet"
exec AS_MESSAGE_LOG_FD>/dev/null
$SHELL "$CONFIG_LT" $lt_config_lt_args || lt_cl_success=false
exec AS_MESSAGE_LOG_FD>>config.log
$lt_cl_success || AS_EXIT(1)
])# LT_OUTPUT
# _LT_CONFIG(TAG)
# ---------------
# If TAG is the built-in tag, create an initial libtool script with a
# default configuration from the untagged config vars. Otherwise add code
# to config.status for appending the configuration named by TAG from the
# matching tagged config vars.
m4_defun([_LT_CONFIG],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
_LT_CONFIG_SAVE_COMMANDS([
m4_define([_LT_TAG], m4_if([$1], [], [C], [$1]))dnl
m4_if(_LT_TAG, [C], [
# See if we are running on zsh, and set the options that allow our
# commands through without removal of \ escapes.
if test -n "${ZSH_VERSION+set}"; then
setopt NO_GLOB_SUBST
fi
cfgfile=${ofile}T
trap "$RM \"$cfgfile\"; exit 1" 1 2 15
$RM "$cfgfile"
cat <<_LT_EOF >> "$cfgfile"
#! $SHELL
# Generated automatically by $as_me ($PACKAGE) $VERSION
# NOTE: Changes made to this file will be lost: look at ltmain.sh.
# Provide generalized library-building support services.
# Written by Gordon Matzigkeit, 1996
_LT_COPYING
_LT_LIBTOOL_TAGS
# Configured defaults for sys_lib_dlsearch_path munging.
: \${LT_SYS_LIBRARY_PATH="$configure_time_lt_sys_library_path"}
# ### BEGIN LIBTOOL CONFIG
_LT_LIBTOOL_CONFIG_VARS
_LT_LIBTOOL_TAG_VARS
# ### END LIBTOOL CONFIG
_LT_EOF
cat <<'_LT_EOF' >> "$cfgfile"
# ### BEGIN FUNCTIONS SHARED WITH CONFIGURE
_LT_PREPARE_MUNGE_PATH_LIST
_LT_PREPARE_CC_BASENAME
# ### END FUNCTIONS SHARED WITH CONFIGURE
_LT_EOF
case $host_os in
aix3*)
cat <<\_LT_EOF >> "$cfgfile"
# AIX sometimes has problems with the GCC collect2 program. For some
# reason, if we set the COLLECT_NAMES environment variable, the problems
# vanish in a puff of smoke.
if test set != "${COLLECT_NAMES+set}"; then
COLLECT_NAMES=
export COLLECT_NAMES
fi
_LT_EOF
;;
esac
_LT_PROG_LTMAIN
# We use sed instead of cat because bash on DJGPP gets confused if
# it finds mixed CR/LF and LF-only lines. Since sed operates in
# text mode, it properly converts lines to CR/LF. This bash problem
# is reportedly fixed, but why not run on old versions too?
sed '$q' "$ltmain" >> "$cfgfile" \
|| (rm -f "$cfgfile"; exit 1)
mv -f "$cfgfile" "$ofile" ||
(rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
chmod +x "$ofile"
],
[cat <<_LT_EOF >> "$ofile"
dnl Unfortunately we have to use $1 here, since _LT_TAG is not expanded
dnl in a comment (ie after a #).
# ### BEGIN LIBTOOL TAG CONFIG: $1
_LT_LIBTOOL_TAG_VARS(_LT_TAG)
# ### END LIBTOOL TAG CONFIG: $1
_LT_EOF
])dnl /m4_if
],
[m4_if([$1], [], [
PACKAGE='$PACKAGE'
VERSION='$VERSION'
RM='$RM'
ofile='$ofile'], [])
])dnl /_LT_CONFIG_SAVE_COMMANDS
])# _LT_CONFIG
# LT_SUPPORTED_TAG(TAG)
# ---------------------
# Trace this macro to discover what tags are supported by the libtool
# --tag option, using:
# autoconf --trace 'LT_SUPPORTED_TAG:$1'
AC_DEFUN([LT_SUPPORTED_TAG], [])
# C support is built-in for now
m4_define([_LT_LANG_C_enabled], [])
m4_define([_LT_TAGS], [])
# LT_LANG(LANG)
# -------------
# Enable libtool support for the given language if not already enabled.
AC_DEFUN([LT_LANG],
[AC_BEFORE([$0], [LT_OUTPUT])dnl
m4_case([$1],
[C], [_LT_LANG(C)],
[C++], [_LT_LANG(CXX)],
[Go], [_LT_LANG(GO)],
[Java], [_LT_LANG(GCJ)],
[Fortran 77], [_LT_LANG(F77)],
[Fortran], [_LT_LANG(FC)],
[Windows Resource], [_LT_LANG(RC)],
[m4_ifdef([_LT_LANG_]$1[_CONFIG],
[_LT_LANG($1)],
[m4_fatal([$0: unsupported language: "$1"])])])dnl
])# LT_LANG
# _LT_LANG(LANGNAME)
# ------------------
m4_defun([_LT_LANG],
[m4_ifdef([_LT_LANG_]$1[_enabled], [],
[LT_SUPPORTED_TAG([$1])dnl
m4_append([_LT_TAGS], [$1 ])dnl
m4_define([_LT_LANG_]$1[_enabled], [])dnl
_LT_LANG_$1_CONFIG($1)])dnl
])# _LT_LANG
m4_ifndef([AC_PROG_GO], [
############################################################
# NOTE: This macro has been submitted for inclusion into #
# GNU Autoconf as AC_PROG_GO. When it is available in #
# a released version of Autoconf we should remove this #
# macro and use it instead. #
############################################################
m4_defun([AC_PROG_GO],
[AC_LANG_PUSH(Go)dnl
AC_ARG_VAR([GOC], [Go compiler command])dnl
AC_ARG_VAR([GOFLAGS], [Go compiler flags])dnl
_AC_ARG_VAR_LDFLAGS()dnl
AC_CHECK_TOOL(GOC, gccgo)
if test -z "$GOC"; then
if test -n "$ac_tool_prefix"; then
AC_CHECK_PROG(GOC, [${ac_tool_prefix}gccgo], [${ac_tool_prefix}gccgo])
fi
fi
if test -z "$GOC"; then
AC_CHECK_PROG(GOC, gccgo, gccgo, false)
fi
])#m4_defun
])#m4_ifndef
# _LT_LANG_DEFAULT_CONFIG
# -----------------------
m4_defun([_LT_LANG_DEFAULT_CONFIG],
[AC_PROVIDE_IFELSE([AC_PROG_CXX],
[LT_LANG(CXX)],
[m4_define([AC_PROG_CXX], defn([AC_PROG_CXX])[LT_LANG(CXX)])])
AC_PROVIDE_IFELSE([AC_PROG_F77],
[LT_LANG(F77)],
[m4_define([AC_PROG_F77], defn([AC_PROG_F77])[LT_LANG(F77)])])
AC_PROVIDE_IFELSE([AC_PROG_FC],
[LT_LANG(FC)],
[m4_define([AC_PROG_FC], defn([AC_PROG_FC])[LT_LANG(FC)])])
dnl The call to [A][M_PROG_GCJ] is quoted like that to stop aclocal
dnl pulling things in needlessly.
AC_PROVIDE_IFELSE([AC_PROG_GCJ],
[LT_LANG(GCJ)],
[AC_PROVIDE_IFELSE([A][M_PROG_GCJ],
[LT_LANG(GCJ)],
[AC_PROVIDE_IFELSE([LT_PROG_GCJ],
[LT_LANG(GCJ)],
[m4_ifdef([AC_PROG_GCJ],
[m4_define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[LT_LANG(GCJ)])])
m4_ifdef([A][M_PROG_GCJ],
[m4_define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[LT_LANG(GCJ)])])
m4_ifdef([LT_PROG_GCJ],
[m4_define([LT_PROG_GCJ], defn([LT_PROG_GCJ])[LT_LANG(GCJ)])])])])])
AC_PROVIDE_IFELSE([AC_PROG_GO],
[LT_LANG(GO)],
[m4_define([AC_PROG_GO], defn([AC_PROG_GO])[LT_LANG(GO)])])
AC_PROVIDE_IFELSE([LT_PROG_RC],
[LT_LANG(RC)],
[m4_define([LT_PROG_RC], defn([LT_PROG_RC])[LT_LANG(RC)])])
])# _LT_LANG_DEFAULT_CONFIG
# Obsolete macros:
AU_DEFUN([AC_LIBTOOL_CXX], [LT_LANG(C++)])
AU_DEFUN([AC_LIBTOOL_F77], [LT_LANG(Fortran 77)])
AU_DEFUN([AC_LIBTOOL_FC], [LT_LANG(Fortran)])
AU_DEFUN([AC_LIBTOOL_GCJ], [LT_LANG(Java)])
AU_DEFUN([AC_LIBTOOL_RC], [LT_LANG(Windows Resource)])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_CXX], [])
dnl AC_DEFUN([AC_LIBTOOL_F77], [])
dnl AC_DEFUN([AC_LIBTOOL_FC], [])
dnl AC_DEFUN([AC_LIBTOOL_GCJ], [])
dnl AC_DEFUN([AC_LIBTOOL_RC], [])
# _LT_TAG_COMPILER
# ----------------
m4_defun([_LT_TAG_COMPILER],
[AC_REQUIRE([AC_PROG_CC])dnl
_LT_DECL([LTCC], [CC], [1], [A C compiler])dnl
_LT_DECL([LTCFLAGS], [CFLAGS], [1], [LTCC compiler flags])dnl
_LT_TAGDECL([CC], [compiler], [1], [A language specific compiler])dnl
_LT_TAGDECL([with_gcc], [GCC], [0], [Is the compiler the GNU compiler?])dnl
# If no C compiler was specified, use CC.
LTCC=${LTCC-"$CC"}
# If no C compiler flags were specified, use CFLAGS.
LTCFLAGS=${LTCFLAGS-"$CFLAGS"}
# Allow CC to be a program name with arguments.
compiler=$CC
])# _LT_TAG_COMPILER
# _LT_COMPILER_BOILERPLATE
# ------------------------
# Check for compiler boilerplate output or warnings with
# the simple compiler test code.
m4_defun([_LT_COMPILER_BOILERPLATE],
[m4_require([_LT_DECL_SED])dnl
ac_outfile=conftest.$ac_objext
echo "$lt_simple_compile_test_code" >conftest.$ac_ext
eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err
_lt_compiler_boilerplate=`cat conftest.err`
$RM conftest*
])# _LT_COMPILER_BOILERPLATE
# _LT_LINKER_BOILERPLATE
# ----------------------
# Check for linker boilerplate output or warnings with
# the simple link test code.
m4_defun([_LT_LINKER_BOILERPLATE],
[m4_require([_LT_DECL_SED])dnl
ac_outfile=conftest.$ac_objext
echo "$lt_simple_link_test_code" >conftest.$ac_ext
eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err
_lt_linker_boilerplate=`cat conftest.err`
$RM -r conftest*
])# _LT_LINKER_BOILERPLATE
# _LT_REQUIRED_DARWIN_CHECKS
# -------------------------
m4_defun_once([_LT_REQUIRED_DARWIN_CHECKS],[
case $host_os in
rhapsody* | darwin*)
AC_CHECK_TOOL([DSYMUTIL], [dsymutil], [:])
AC_CHECK_TOOL([NMEDIT], [nmedit], [:])
AC_CHECK_TOOL([LIPO], [lipo], [:])
AC_CHECK_TOOL([OTOOL], [otool], [:])
AC_CHECK_TOOL([OTOOL64], [otool64], [:])
_LT_DECL([], [DSYMUTIL], [1],
[Tool to manipulate archived DWARF debug symbol files on Mac OS X])
_LT_DECL([], [NMEDIT], [1],
[Tool to change global to local symbols on Mac OS X])
_LT_DECL([], [LIPO], [1],
[Tool to manipulate fat objects and archives on Mac OS X])
_LT_DECL([], [OTOOL], [1],
[ldd/readelf like tool for Mach-O binaries on Mac OS X])
_LT_DECL([], [OTOOL64], [1],
[ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4])
AC_CACHE_CHECK([for -single_module linker flag],[lt_cv_apple_cc_single_mod],
[lt_cv_apple_cc_single_mod=no
if test -z "$LT_MULTI_MODULE"; then
# By default we will add the -single_module flag. You can override
# by either setting the environment variable LT_MULTI_MODULE
# non-empty at configure time, or by adding -multi_module to the
# link flags.
rm -rf libconftest.dylib*
echo "int foo(void){return 1;}" > conftest.c
echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \
-dynamiclib -Wl,-single_module conftest.c" >&AS_MESSAGE_LOG_FD
$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \
-dynamiclib -Wl,-single_module conftest.c 2>conftest.err
_lt_result=$?
# If there is a non-empty error log, and "single_module"
# appears in it, assume the flag caused a linker warning
if test -s conftest.err && $GREP single_module conftest.err; then
cat conftest.err >&AS_MESSAGE_LOG_FD
# Otherwise, if the output was created with a 0 exit code from
# the compiler, it worked.
elif test -f libconftest.dylib && test 0 = "$_lt_result"; then
lt_cv_apple_cc_single_mod=yes
else
cat conftest.err >&AS_MESSAGE_LOG_FD
fi
rm -rf libconftest.dylib*
rm -f conftest.*
fi])
AC_CACHE_CHECK([for -exported_symbols_list linker flag],
[lt_cv_ld_exported_symbols_list],
[lt_cv_ld_exported_symbols_list=no
save_LDFLAGS=$LDFLAGS
echo "_main" > conftest.sym
LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym"
AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])],
[lt_cv_ld_exported_symbols_list=yes],
[lt_cv_ld_exported_symbols_list=no])
LDFLAGS=$save_LDFLAGS
])
AC_CACHE_CHECK([for -force_load linker flag],[lt_cv_ld_force_load],
[lt_cv_ld_force_load=no
cat > conftest.c << _LT_EOF
int forced_loaded() { return 2;}
_LT_EOF
echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&AS_MESSAGE_LOG_FD
$LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD
echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD
$AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD
echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD
$RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD
cat > conftest.c << _LT_EOF
int main() { return 0;}
_LT_EOF
echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&AS_MESSAGE_LOG_FD
$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err
_lt_result=$?
if test -s conftest.err && $GREP force_load conftest.err; then
cat conftest.err >&AS_MESSAGE_LOG_FD
elif test -f conftest && test 0 = "$_lt_result" && $GREP forced_load conftest >/dev/null 2>&1; then
lt_cv_ld_force_load=yes
else
cat conftest.err >&AS_MESSAGE_LOG_FD
fi
rm -f conftest.err libconftest.a conftest conftest.c
rm -rf conftest.dSYM
])
case $host_os in
rhapsody* | darwin1.[[012]])
_lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
darwin1.*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
darwin*) # darwin 5.x on
# If running on 10.5 or later, the deployment target defaults
# to the OS version; if on x86 and 10.4, the deployment
# target defaults to 10.4. Don't you love it?
case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
10.0,*86*-darwin8*|10.0,*-darwin[[91]]*)
_lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
10.[[012]][[,.]]*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
10.*)
_lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
esac
;;
esac
if test yes = "$lt_cv_apple_cc_single_mod"; then
_lt_dar_single_mod='$single_module'
fi
if test yes = "$lt_cv_ld_exported_symbols_list"; then
_lt_dar_export_syms=' $wl-exported_symbols_list,$output_objdir/$libname-symbols.expsym'
else
_lt_dar_export_syms='~$NMEDIT -s $output_objdir/$libname-symbols.expsym $lib'
fi
if test : != "$DSYMUTIL" && test no = "$lt_cv_ld_force_load"; then
_lt_dsymutil='~$DSYMUTIL $lib || :'
else
_lt_dsymutil=
fi
;;
esac
])
# _LT_DARWIN_LINKER_FEATURES([TAG])
# ---------------------------------
# Checks for linker and compiler features on darwin
m4_defun([_LT_DARWIN_LINKER_FEATURES],
[
m4_require([_LT_REQUIRED_DARWIN_CHECKS])
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_automatic, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
if test yes = "$lt_cv_ld_force_load"; then
_LT_TAGVAR(whole_archive_flag_spec, $1)='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`'
m4_case([$1], [F77], [_LT_TAGVAR(compiler_needs_object, $1)=yes],
[FC], [_LT_TAGVAR(compiler_needs_object, $1)=yes])
else
_LT_TAGVAR(whole_archive_flag_spec, $1)=''
fi
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=$_lt_dar_allow_undefined
case $cc_basename in
ifort*|nagfor*) _lt_dar_can_shared=yes ;;
*) _lt_dar_can_shared=$GCC ;;
esac
if test yes = "$_lt_dar_can_shared"; then
output_verbose_link_cmd=func_echo_all
_LT_TAGVAR(archive_cmds, $1)="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil"
_LT_TAGVAR(module_cmds, $1)="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil"
_LT_TAGVAR(archive_expsym_cmds, $1)="sed 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil"
_LT_TAGVAR(module_expsym_cmds, $1)="sed -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil"
m4_if([$1], [CXX],
[ if test yes != "$lt_cv_apple_cc_single_mod"; then
_LT_TAGVAR(archive_cmds, $1)="\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dsymutil"
_LT_TAGVAR(archive_expsym_cmds, $1)="sed 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dar_export_syms$_lt_dsymutil"
fi
],[])
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
])
# _LT_SYS_MODULE_PATH_AIX([TAGNAME])
# ----------------------------------
# Links a minimal program and checks the executable
# for the system default hardcoded library path. In most cases,
# this is /usr/lib:/lib, but when the MPI compilers are used
# the location of the communication and MPI libs is included too.
# If we don't find anything, use the default library path according
# to the aix ld manual.
# Store the results from the different compilers for each TAGNAME.
# Allow overriding them for all tags through lt_cv_aix_libpath.
m4_defun([_LT_SYS_MODULE_PATH_AIX],
[m4_require([_LT_DECL_SED])dnl
if test set = "${lt_cv_aix_libpath+set}"; then
aix_libpath=$lt_cv_aix_libpath
else
AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])],
[AC_LINK_IFELSE([AC_LANG_PROGRAM],[
lt_aix_libpath_sed='[
/Import File Strings/,/^$/ {
/^0/ {
s/^0 *\([^ ]*\) *$/\1/
p
}
}]'
_LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
# Check for a 64-bit object if we didn't find anything.
if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
_LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
fi],[])
if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
_LT_TAGVAR([lt_cv_aix_libpath_], [$1])=/usr/lib:/lib
fi
])
aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])
fi
])# _LT_SYS_MODULE_PATH_AIX
# _LT_SHELL_INIT(ARG)
# -------------------
m4_define([_LT_SHELL_INIT],
[m4_divert_text([M4SH-INIT], [$1
])])# _LT_SHELL_INIT
# _LT_PROG_ECHO_BACKSLASH
# -----------------------
# Find how we can fake an echo command that does not interpret backslash.
# In particular, with Autoconf 2.60 or later we add some code to the start
# of the generated configure script that will find a shell with a builtin
# printf (that we can use as an echo command).
m4_defun([_LT_PROG_ECHO_BACKSLASH],
[ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
AC_MSG_CHECKING([how to print strings])
# Test print first, because it will be a builtin if present.
if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
ECHO='print -r --'
elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
ECHO='printf %s\n'
else
# Use this function as a fallback that always works.
func_fallback_echo ()
{
eval 'cat <<_LTECHO_EOF
$[]1
_LTECHO_EOF'
}
ECHO='func_fallback_echo'
fi
# func_echo_all arg...
# Invoke $ECHO with all args, space-separated.
func_echo_all ()
{
$ECHO "$*"
}
case $ECHO in
printf*) AC_MSG_RESULT([printf]) ;;
print*) AC_MSG_RESULT([print -r]) ;;
*) AC_MSG_RESULT([cat]) ;;
esac
m4_ifdef([_AS_DETECT_SUGGESTED],
[_AS_DETECT_SUGGESTED([
test -n "${ZSH_VERSION+set}${BASH_VERSION+set}" || (
ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
PATH=/empty FPATH=/empty; export PATH FPATH
test "X`printf %s $ECHO`" = "X$ECHO" \
|| test "X`print -r -- $ECHO`" = "X$ECHO" )])])
_LT_DECL([], [SHELL], [1], [Shell to use when invoking shell scripts])
_LT_DECL([], [ECHO], [1], [An echo program that protects backslashes])
])# _LT_PROG_ECHO_BACKSLASH
# _LT_WITH_SYSROOT
# ----------------
AC_DEFUN([_LT_WITH_SYSROOT],
[AC_MSG_CHECKING([for sysroot])
AC_ARG_WITH([sysroot],
[AS_HELP_STRING([--with-sysroot@<:@=DIR@:>@],
[Search for dependent libraries within DIR (or the compiler's sysroot
if not specified).])],
[], [with_sysroot=no])
dnl lt_sysroot will always be passed unquoted. We quote it here
dnl in case the user passed a directory name.
lt_sysroot=
case $with_sysroot in #(
yes)
if test yes = "$GCC"; then
lt_sysroot=`$CC --print-sysroot 2>/dev/null`
fi
;; #(
/*)
lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"`
;; #(
no|'')
;; #(
*)
AC_MSG_RESULT([$with_sysroot])
AC_MSG_ERROR([The sysroot must be an absolute path.])
;;
esac
AC_MSG_RESULT([${lt_sysroot:-no}])
_LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl
[dependent libraries, and where our libraries should be installed.])])
# _LT_ENABLE_LOCK
# ---------------
m4_defun([_LT_ENABLE_LOCK],
[AC_ARG_ENABLE([libtool-lock],
[AS_HELP_STRING([--disable-libtool-lock],
[avoid locking (might break parallel builds)])])
test no = "$enable_libtool_lock" || enable_libtool_lock=yes
# Some flags need to be propagated to the compiler or linker for good
# libtool support.
case $host in
ia64-*-hpux*)
# Find out what ABI is being produced by ac_compile, and set mode
# options accordingly.
echo 'int i;' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
case `/usr/bin/file conftest.$ac_objext` in
*ELF-32*)
HPUX_IA64_MODE=32
;;
*ELF-64*)
HPUX_IA64_MODE=64
;;
esac
fi
rm -rf conftest*
;;
*-*-irix6*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly.
echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
if test yes = "$lt_cv_prog_gnu_ld"; then
case `/usr/bin/file conftest.$ac_objext` in
*32-bit*)
LD="${LD-ld} -melf32bsmip"
;;
*N32*)
LD="${LD-ld} -melf32bmipn32"
;;
*64-bit*)
LD="${LD-ld} -melf64bmip"
;;
esac
else
case `/usr/bin/file conftest.$ac_objext` in
*32-bit*)
LD="${LD-ld} -32"
;;
*N32*)
LD="${LD-ld} -n32"
;;
*64-bit*)
LD="${LD-ld} -64"
;;
esac
fi
fi
rm -rf conftest*
;;
mips64*-*linux*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly.
echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
emul=elf
case `/usr/bin/file conftest.$ac_objext` in
*32-bit*)
emul="${emul}32"
;;
*64-bit*)
emul="${emul}64"
;;
esac
case `/usr/bin/file conftest.$ac_objext` in
*MSB*)
emul="${emul}btsmip"
;;
*LSB*)
emul="${emul}ltsmip"
;;
esac
case `/usr/bin/file conftest.$ac_objext` in
*N32*)
emul="${emul}n32"
;;
esac
LD="${LD-ld} -m $emul"
fi
rm -rf conftest*
;;
x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \
s390*-*linux*|s390*-*tpf*|sparc*-*linux*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly. Note that the listed cases only cover the
# situations where additional linker options are needed (such as when
# doing 32-bit compilation for a host where ld defaults to 64-bit, or
# vice versa); the common cases where no linker options are needed do
# not appear in the list.
echo 'int i;' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
case `/usr/bin/file conftest.o` in
*32-bit*)
case $host in
x86_64-*kfreebsd*-gnu)
LD="${LD-ld} -m elf_i386_fbsd"
;;
x86_64-*linux*)
case `/usr/bin/file conftest.o` in
*x86-64*)
LD="${LD-ld} -m elf32_x86_64"
;;
*)
LD="${LD-ld} -m elf_i386"
;;
esac
;;
powerpc64le-*linux*)
LD="${LD-ld} -m elf32lppclinux"
;;
powerpc64-*linux*)
LD="${LD-ld} -m elf32ppclinux"
;;
s390x-*linux*)
LD="${LD-ld} -m elf_s390"
;;
sparc64-*linux*)
LD="${LD-ld} -m elf32_sparc"
;;
esac
;;
*64-bit*)
case $host in
x86_64-*kfreebsd*-gnu)
LD="${LD-ld} -m elf_x86_64_fbsd"
;;
x86_64-*linux*)
LD="${LD-ld} -m elf_x86_64"
;;
powerpcle-*linux*)
LD="${LD-ld} -m elf64lppc"
;;
powerpc-*linux*)
LD="${LD-ld} -m elf64ppc"
;;
s390*-*linux*|s390*-*tpf*)
LD="${LD-ld} -m elf64_s390"
;;
sparc*-*linux*)
LD="${LD-ld} -m elf64_sparc"
;;
esac
;;
esac
fi
rm -rf conftest*
;;
*-*-sco3.2v5*)
# On SCO OpenServer 5, we need -belf to get full-featured binaries.
SAVE_CFLAGS=$CFLAGS
CFLAGS="$CFLAGS -belf"
AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf,
[AC_LANG_PUSH(C)
AC_LINK_IFELSE([AC_LANG_PROGRAM([[]],[[]])],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no])
AC_LANG_POP])
if test yes != "$lt_cv_cc_needs_belf"; then
# this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
CFLAGS=$SAVE_CFLAGS
fi
;;
*-*solaris*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly.
echo 'int i;' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
case `/usr/bin/file conftest.o` in
*64-bit*)
case $lt_cv_prog_gnu_ld in
yes*)
case $host in
i?86-*-solaris*|x86_64-*-solaris*)
LD="${LD-ld} -m elf_x86_64"
;;
sparc*-*-solaris*)
LD="${LD-ld} -m elf64_sparc"
;;
esac
# GNU ld 2.21 introduced _sol2 emulations. Use them if available.
if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then
LD=${LD-ld}_sol2
fi
;;
*)
if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then
LD="${LD-ld} -64"
fi
;;
esac
;;
esac
fi
rm -rf conftest*
;;
esac
need_locks=$enable_libtool_lock
])# _LT_ENABLE_LOCK
# _LT_PROG_AR
# -----------
m4_defun([_LT_PROG_AR],
[AC_CHECK_TOOLS(AR, [ar], false)
: ${AR=ar}
: ${AR_FLAGS=cru}
_LT_DECL([], [AR], [1], [The archiver])
_LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive])
AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file],
[lt_cv_ar_at_file=no
AC_COMPILE_IFELSE([AC_LANG_PROGRAM],
[echo conftest.$ac_objext > conftest.lst
lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD'
AC_TRY_EVAL([lt_ar_try])
if test 0 -eq "$ac_status"; then
# Ensure the archiver fails upon bogus file names.
rm -f conftest.$ac_objext libconftest.a
AC_TRY_EVAL([lt_ar_try])
if test 0 -ne "$ac_status"; then
lt_cv_ar_at_file=@
fi
fi
rm -f conftest.* libconftest.a
])
])
if test no = "$lt_cv_ar_at_file"; then
archiver_list_spec=
else
archiver_list_spec=$lt_cv_ar_at_file
fi
_LT_DECL([], [archiver_list_spec], [1],
[How to feed a file listing to the archiver])
])# _LT_PROG_AR
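# For illustration only: when the probe above sets archiver_list_spec=@, the
# generated libtool can feed long object lists to the archiver through a
# response file rather than the command line, roughly:
#   printf '%s\n' foo.o bar.o > objects.lst
#   $AR cru libfoo.a @objects.lst
# (foo.o, bar.o and objects.lst are example names; the probe itself uses
# conftest.lst.)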
# _LT_CMD_OLD_ARCHIVE
# -------------------
m4_defun([_LT_CMD_OLD_ARCHIVE],
[_LT_PROG_AR
AC_CHECK_TOOL(STRIP, strip, :)
test -z "$STRIP" && STRIP=:
_LT_DECL([], [STRIP], [1], [A symbol stripping program])
AC_CHECK_TOOL(RANLIB, ranlib, :)
test -z "$RANLIB" && RANLIB=:
_LT_DECL([], [RANLIB], [1],
[Commands used to install an old-style archive])
# Determine commands to create old-style static archives.
old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs'
old_postinstall_cmds='chmod 644 $oldlib'
old_postuninstall_cmds=
if test -n "$RANLIB"; then
case $host_os in
bitrig* | openbsd*)
old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib"
;;
*)
old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib"
;;
esac
old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib"
fi
case $host_os in
darwin*)
lock_old_archive_extraction=yes ;;
*)
lock_old_archive_extraction=no ;;
esac
_LT_DECL([], [old_postinstall_cmds], [2])
_LT_DECL([], [old_postuninstall_cmds], [2])
_LT_TAGDECL([], [old_archive_cmds], [2],
[Commands used to build an old-style archive])
_LT_DECL([], [lock_old_archive_extraction], [0],
[Whether to use a lock for old archive extraction])
])# _LT_CMD_OLD_ARCHIVE
# _LT_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
# [OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE])
# ----------------------------------------------------------------
# Check whether the given compiler option works
AC_DEFUN([_LT_COMPILER_OPTION],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_SED])dnl
AC_CACHE_CHECK([$1], [$2],
[$2=no
m4_if([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4])
echo "$lt_simple_compile_test_code" > conftest.$ac_ext
lt_compiler_flag="$3" ## exclude from sc_useless_quotes_in_assignment
# Insert the option either (1) after the last *FLAGS variable, or
# (2) before a word containing "conftest.", or (3) at the end.
# Note that $ac_compile itself does not contain backslashes and begins
# with a dollar sign (not a hyphen), so the echo should work correctly.
# The option is referenced via a variable to avoid confusing sed.
lt_compile=`echo "$ac_compile" | $SED \
-e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
-e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
-e 's:$: $lt_compiler_flag:'`
(eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
(eval "$lt_compile" 2>conftest.err)
ac_status=$?
cat conftest.err >&AS_MESSAGE_LOG_FD
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
if (exit $ac_status) && test -s "$ac_outfile"; then
# The compiler can only warn and ignore the option if not recognized
# So say no if there are warnings other than the usual output.
$ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp
$SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then
$2=yes
fi
fi
$RM conftest*
])
if test yes = "[$]$2"; then
m4_if([$5], , :, [$5])
else
m4_if([$6], , :, [$6])
fi
])# _LT_COMPILER_OPTION
# Old name:
AU_ALIAS([AC_LIBTOOL_COMPILER_OPTION], [_LT_COMPILER_OPTION])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION], [])
# _LT_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
# [ACTION-SUCCESS], [ACTION-FAILURE])
# ----------------------------------------------------
# Check whether the given linker option works
AC_DEFUN([_LT_LINKER_OPTION],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_SED])dnl
AC_CACHE_CHECK([$1], [$2],
[$2=no
save_LDFLAGS=$LDFLAGS
LDFLAGS="$LDFLAGS $3"
echo "$lt_simple_link_test_code" > conftest.$ac_ext
if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
# The linker can only warn and ignore the option if not recognized
# So say no if there are warnings
if test -s conftest.err; then
# Append any errors to the config.log.
cat conftest.err 1>&AS_MESSAGE_LOG_FD
$ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp
$SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
if diff conftest.exp conftest.er2 >/dev/null; then
$2=yes
fi
else
$2=yes
fi
fi
$RM -r conftest*
LDFLAGS=$save_LDFLAGS
])
if test yes = "[$]$2"; then
m4_if([$4], , :, [$4])
else
m4_if([$5], , :, [$5])
fi
])# _LT_LINKER_OPTION
# Old name:
AU_ALIAS([AC_LIBTOOL_LINKER_OPTION], [_LT_LINKER_OPTION])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_LINKER_OPTION], [])
# LT_CMD_MAX_LEN
#---------------
AC_DEFUN([LT_CMD_MAX_LEN],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
# find the maximum length of command line arguments
AC_MSG_CHECKING([the maximum length of command line arguments])
AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl
i=0
teststring=ABCD
case $build_os in
msdosdjgpp*)
# On DJGPP, this test can blow up pretty badly due to problems in libc
# (any single argument exceeding 2000 bytes causes a buffer overrun
# during glob expansion). Even if it were fixed, the result of this
# check would be larger than it should be.
lt_cv_sys_max_cmd_len=12288; # 12K is about right
;;
gnu*)
# Under GNU Hurd, this test is not required because there is
# no limit to the length of command line arguments.
# Libtool will interpret -1 as no limit whatsoever
lt_cv_sys_max_cmd_len=-1;
;;
cygwin* | mingw* | cegcc*)
# On Win9x/ME, this test blows up -- it succeeds, but takes
# about 5 minutes as the teststring grows exponentially.
# Worse, since 9x/ME are not pre-emptively multitasking,
# you end up with a "frozen" computer, even though with patience
# the test eventually succeeds (with a max line length of 256k).
# Instead, let's just punt: use the minimum linelength reported by
# all of the supported platforms: 8192 (on NT/2K/XP).
lt_cv_sys_max_cmd_len=8192;
;;
mint*)
# On MiNT this can take a long time and run out of memory.
lt_cv_sys_max_cmd_len=8192;
;;
amigaos*)
# On AmigaOS with pdksh, this test takes hours, literally.
# So we just punt and use a minimum line length of 8192.
lt_cv_sys_max_cmd_len=8192;
;;
bitrig* | darwin* | dragonfly* | freebsd* | netbsd* | openbsd*)
# This has been around since 386BSD, at least. Likely further.
if test -x /sbin/sysctl; then
lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax`
elif test -x /usr/sbin/sysctl; then
lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax`
else
lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs
fi
# And add a safety zone
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
;;
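# For example, a system reporting kern.argmax=262144 ends up with
# lt_cv_sys_max_cmd_len=196608 after the 3/4 safety zone applied above.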
interix*)
# We know the value 262144 and hardcode it with a safety zone (like BSD)
lt_cv_sys_max_cmd_len=196608
;;
os2*)
# The test takes a long time on OS/2.
lt_cv_sys_max_cmd_len=8192
;;
osf*)
# Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure
# due to this test when exec_disable_arg_limit is 1 on Tru64. It is not
# nice to cause kernel panics, so let's avoid the loop below.
# First set a reasonable default.
lt_cv_sys_max_cmd_len=16384
#
if test -x /sbin/sysconfig; then
case `/sbin/sysconfig -q proc exec_disable_arg_limit` in
*1*) lt_cv_sys_max_cmd_len=-1 ;;
esac
fi
;;
sco3.2v5*)
lt_cv_sys_max_cmd_len=102400
;;
sysv5* | sco5v6* | sysv4.2uw2*)
kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null`
if test -n "$kargmax"; then
lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[[ ]]//'`
else
lt_cv_sys_max_cmd_len=32768
fi
;;
*)
lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null`
if test -n "$lt_cv_sys_max_cmd_len" && \
test undefined != "$lt_cv_sys_max_cmd_len"; then
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
else
# Make teststring a little bigger before we do anything with it.
# a 1K string should be a reasonable start.
for i in 1 2 3 4 5 6 7 8; do
teststring=$teststring$teststring
done
SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}}
# If test is not a shell built-in, we'll probably end up computing a
# maximum length that is only half of the actual maximum length, but
# we can't tell.
while { test X`env echo "$teststring$teststring" 2>/dev/null` \
= "X$teststring$teststring"; } >/dev/null 2>&1 &&
test 17 != "$i" # 1/2 MB should be enough
do
i=`expr $i + 1`
teststring=$teststring$teststring
done
# Only check the string length outside the loop.
lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1`
teststring=
# Add a significant safety factor because C++ compilers can tack on
# massive amounts of additional arguments before passing them to the
# linker. It appears as though 1/2 is a usable value.
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2`
fi
;;
esac
])
if test -n "$lt_cv_sys_max_cmd_len"; then
AC_MSG_RESULT($lt_cv_sys_max_cmd_len)
else
AC_MSG_RESULT(none)
fi
max_cmd_len=$lt_cv_sys_max_cmd_len
_LT_DECL([], [max_cmd_len], [0],
[What is the maximum length of a command?])
])# LT_CMD_MAX_LEN
# Old name:
AU_ALIAS([AC_LIBTOOL_SYS_MAX_CMD_LEN], [LT_CMD_MAX_LEN])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN], [])
# _LT_HEADER_DLFCN
# ----------------
m4_defun([_LT_HEADER_DLFCN],
[AC_CHECK_HEADERS([dlfcn.h], [], [], [AC_INCLUDES_DEFAULT])dnl
])# _LT_HEADER_DLFCN
# _LT_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE,
# ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING)
# ----------------------------------------------------------------
m4_defun([_LT_TRY_DLOPEN_SELF],
[m4_require([_LT_HEADER_DLFCN])dnl
if test yes = "$cross_compiling"; then :
[$4]
else
lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
lt_status=$lt_dlunknown
cat > conftest.$ac_ext <<_LT_EOF
[#line $LINENO "configure"
#include "confdefs.h"
#if HAVE_DLFCN_H
#include <dlfcn.h>
#endif
#include <stdio.h>
#ifdef RTLD_GLOBAL
# define LT_DLGLOBAL RTLD_GLOBAL
#else
# ifdef DL_GLOBAL
# define LT_DLGLOBAL DL_GLOBAL
# else
# define LT_DLGLOBAL 0
# endif
#endif
/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
find out it does not work in some platform. */
#ifndef LT_DLLAZY_OR_NOW
# ifdef RTLD_LAZY
# define LT_DLLAZY_OR_NOW RTLD_LAZY
# else
# ifdef DL_LAZY
# define LT_DLLAZY_OR_NOW DL_LAZY
# else
# ifdef RTLD_NOW
# define LT_DLLAZY_OR_NOW RTLD_NOW
# else
# ifdef DL_NOW
# define LT_DLLAZY_OR_NOW DL_NOW
# else
# define LT_DLLAZY_OR_NOW 0
# endif
# endif
# endif
# endif
#endif
/* When -fvisibility=hidden is used, assume the code has been annotated
correspondingly for the symbols needed. */
#if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
int fnord () __attribute__((visibility("default")));
#endif
int fnord () { return 42; }
int main ()
{
void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
int status = $lt_dlunknown;
if (self)
{
if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
else
{
if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
else puts (dlerror ());
}
/* dlclose (self); */
}
else
puts (dlerror ());
return status;
}]
_LT_EOF
if AC_TRY_EVAL(ac_link) && test -s "conftest$ac_exeext" 2>/dev/null; then
(./conftest; exit; ) >&AS_MESSAGE_LOG_FD 2>/dev/null
lt_status=$?
case x$lt_status in
x$lt_dlno_uscore) $1 ;;
x$lt_dlneed_uscore) $2 ;;
x$lt_dlunknown|x*) $3 ;;
esac
else :
# compilation failed
$3
fi
fi
rm -fr conftest*
])# _LT_TRY_DLOPEN_SELF
# LT_SYS_DLOPEN_SELF
# ------------------
AC_DEFUN([LT_SYS_DLOPEN_SELF],
[m4_require([_LT_HEADER_DLFCN])dnl
if test yes != "$enable_dlopen"; then
enable_dlopen=unknown
enable_dlopen_self=unknown
enable_dlopen_self_static=unknown
else
lt_cv_dlopen=no
lt_cv_dlopen_libs=
case $host_os in
beos*)
lt_cv_dlopen=load_add_on
lt_cv_dlopen_libs=
lt_cv_dlopen_self=yes
;;
mingw* | pw32* | cegcc*)
lt_cv_dlopen=LoadLibrary
lt_cv_dlopen_libs=
;;
cygwin*)
lt_cv_dlopen=dlopen
lt_cv_dlopen_libs=
;;
darwin*)
# if libdl is installed we need to link against it
AC_CHECK_LIB([dl], [dlopen],
[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl],[
lt_cv_dlopen=dyld
lt_cv_dlopen_libs=
lt_cv_dlopen_self=yes
])
;;
tpf*)
# Don't try to run any link tests for TPF. We know it's impossible
# because TPF is a cross-compiler, and we know how we open DSOs.
lt_cv_dlopen=dlopen
lt_cv_dlopen_libs=
lt_cv_dlopen_self=no
;;
*)
AC_CHECK_FUNC([shl_load],
[lt_cv_dlopen=shl_load],
[AC_CHECK_LIB([dld], [shl_load],
[lt_cv_dlopen=shl_load lt_cv_dlopen_libs=-ldld],
[AC_CHECK_FUNC([dlopen],
[lt_cv_dlopen=dlopen],
[AC_CHECK_LIB([dl], [dlopen],
[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl],
[AC_CHECK_LIB([svld], [dlopen],
[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-lsvld],
[AC_CHECK_LIB([dld], [dld_link],
[lt_cv_dlopen=dld_link lt_cv_dlopen_libs=-ldld])
])
])
])
])
])
;;
esac
if test no = "$lt_cv_dlopen"; then
enable_dlopen=no
else
enable_dlopen=yes
fi
case $lt_cv_dlopen in
dlopen)
save_CPPFLAGS=$CPPFLAGS
test yes = "$ac_cv_header_dlfcn_h" && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
save_LDFLAGS=$LDFLAGS
wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
save_LIBS=$LIBS
LIBS="$lt_cv_dlopen_libs $LIBS"
AC_CACHE_CHECK([whether a program can dlopen itself],
lt_cv_dlopen_self, [dnl
_LT_TRY_DLOPEN_SELF(
lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes,
lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross)
])
if test yes = "$lt_cv_dlopen_self"; then
wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\"
AC_CACHE_CHECK([whether a statically linked program can dlopen itself],
lt_cv_dlopen_self_static, [dnl
_LT_TRY_DLOPEN_SELF(
lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes,
lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross)
])
fi
CPPFLAGS=$save_CPPFLAGS
LDFLAGS=$save_LDFLAGS
LIBS=$save_LIBS
;;
esac
case $lt_cv_dlopen_self in
yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
*) enable_dlopen_self=unknown ;;
esac
case $lt_cv_dlopen_self_static in
yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
*) enable_dlopen_self_static=unknown ;;
esac
fi
_LT_DECL([dlopen_support], [enable_dlopen], [0],
[Whether dlopen is supported])
_LT_DECL([dlopen_self], [enable_dlopen_self], [0],
[Whether dlopen of programs is supported])
_LT_DECL([dlopen_self_static], [enable_dlopen_self_static], [0],
[Whether dlopen of statically linked programs is supported])
])# LT_SYS_DLOPEN_SELF
# Old name:
AU_ALIAS([AC_LIBTOOL_DLOPEN_SELF], [LT_SYS_DLOPEN_SELF])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF], [])
# _LT_COMPILER_C_O([TAGNAME])
# ---------------------------
# Check to see if options -c and -o are simultaneously supported by compiler.
# This macro does not hard code the compiler like AC_PROG_CC_C_O.
m4_defun([_LT_COMPILER_C_O],
[m4_require([_LT_DECL_SED])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_TAG_COMPILER])dnl
AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext],
[_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)],
[_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no
$RM -r conftest 2>/dev/null
mkdir conftest
cd conftest
mkdir out
echo "$lt_simple_compile_test_code" > conftest.$ac_ext
lt_compiler_flag="-o out/conftest2.$ac_objext"
# Insert the option either (1) after the last *FLAGS variable, or
# (2) before a word containing "conftest.", or (3) at the end.
# Note that $ac_compile itself does not contain backslashes and begins
# with a dollar sign (not a hyphen), so the echo should work correctly.
lt_compile=`echo "$ac_compile" | $SED \
-e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
-e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
-e 's:$: $lt_compiler_flag:'`
(eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
(eval "$lt_compile" 2>out/conftest.err)
ac_status=$?
cat out/conftest.err >&AS_MESSAGE_LOG_FD
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
if (exit $ac_status) && test -s out/conftest2.$ac_objext
then
# The compiler can only warn and ignore the option if not recognized
# So say no if there are warnings
$ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp
$SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2
if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then
_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes
fi
fi
chmod u+w . 2>&AS_MESSAGE_LOG_FD
$RM conftest*
# SGI C++ compiler will create directory out/ii_files/ for
# template instantiation
test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files
$RM out/* && rmdir out
cd ..
$RM -r conftest
$RM conftest*
])
_LT_TAGDECL([compiler_c_o], [lt_cv_prog_compiler_c_o], [1],
[Does compiler simultaneously support -c and -o options?])
])# _LT_COMPILER_C_O
# _LT_COMPILER_FILE_LOCKS([TAGNAME])
# ----------------------------------
# Check to see if we can do hard links to lock some files if needed
m4_defun([_LT_COMPILER_FILE_LOCKS],
[m4_require([_LT_ENABLE_LOCK])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
_LT_COMPILER_C_O([$1])
hard_links=nottested
if test no = "$_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)" && test no != "$need_locks"; then
# do not overwrite the value of need_locks provided by the user
AC_MSG_CHECKING([if we can lock with hard links])
hard_links=yes
$RM conftest*
ln conftest.a conftest.b 2>/dev/null && hard_links=no
touch conftest.a
ln conftest.a conftest.b 2>&5 || hard_links=no
ln conftest.a conftest.b 2>/dev/null && hard_links=no
AC_MSG_RESULT([$hard_links])
if test no = "$hard_links"; then
AC_MSG_WARN(['$CC' does not support '-c -o', so 'make -j' may be unsafe])
need_locks=warn
fi
else
need_locks=no
fi
_LT_DECL([], [need_locks], [1], [Must we lock files when doing compilation?])
])# _LT_COMPILER_FILE_LOCKS
# _LT_CHECK_OBJDIR
# ----------------
m4_defun([_LT_CHECK_OBJDIR],
[AC_CACHE_CHECK([for objdir], [lt_cv_objdir],
[rm -f .libs 2>/dev/null
mkdir .libs 2>/dev/null
if test -d .libs; then
lt_cv_objdir=.libs
else
# MS-DOS does not allow filenames that begin with a dot.
lt_cv_objdir=_libs
fi
rmdir .libs 2>/dev/null])
objdir=$lt_cv_objdir
_LT_DECL([], [objdir], [0],
[The name of the directory that contains temporary libtool files])dnl
m4_pattern_allow([LT_OBJDIR])dnl
AC_DEFINE_UNQUOTED([LT_OBJDIR], "$lt_cv_objdir/",
[Define to the sub-directory where libtool stores uninstalled libraries.])
])# _LT_CHECK_OBJDIR
# _LT_LINKER_HARDCODE_LIBPATH([TAGNAME])
# --------------------------------------
# Check hardcoding attributes.
m4_defun([_LT_LINKER_HARDCODE_LIBPATH],
[AC_MSG_CHECKING([how to hardcode library paths into programs])
_LT_TAGVAR(hardcode_action, $1)=
if test -n "$_LT_TAGVAR(hardcode_libdir_flag_spec, $1)" ||
test -n "$_LT_TAGVAR(runpath_var, $1)" ||
test yes = "$_LT_TAGVAR(hardcode_automatic, $1)"; then
# We can hardcode non-existent directories.
if test no != "$_LT_TAGVAR(hardcode_direct, $1)" &&
# If the only mechanism to avoid hardcoding is shlibpath_var, we
# have to relink, otherwise we might link with an installed library
# when we should be linking with a yet-to-be-installed one
## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, $1)" &&
test no != "$_LT_TAGVAR(hardcode_minus_L, $1)"; then
# Linking always hardcodes the temporary library directory.
_LT_TAGVAR(hardcode_action, $1)=relink
else
# We can link without hardcoding, and we can hardcode nonexisting dirs.
_LT_TAGVAR(hardcode_action, $1)=immediate
fi
else
# We cannot hardcode anything, or else we can only hardcode existing
# directories.
_LT_TAGVAR(hardcode_action, $1)=unsupported
fi
AC_MSG_RESULT([$_LT_TAGVAR(hardcode_action, $1)])
if test relink = "$_LT_TAGVAR(hardcode_action, $1)" ||
test yes = "$_LT_TAGVAR(inherit_rpath, $1)"; then
# Fast installation is not supported
enable_fast_install=no
elif test yes = "$shlibpath_overrides_runpath" ||
test no = "$enable_shared"; then
# Fast installation is not necessary
enable_fast_install=needless
fi
_LT_TAGDECL([], [hardcode_action], [0],
[How to hardcode a shared library path into an executable])
])# _LT_LINKER_HARDCODE_LIBPATH
# _LT_CMD_STRIPLIB
# ----------------
m4_defun([_LT_CMD_STRIPLIB],
[m4_require([_LT_DECL_EGREP])
striplib=
old_striplib=
AC_MSG_CHECKING([whether stripping libraries is possible])
if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then
test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
test -z "$striplib" && striplib="$STRIP --strip-unneeded"
AC_MSG_RESULT([yes])
else
# FIXME - insert some real tests, host_os isn't really good enough
case $host_os in
darwin*)
if test -n "$STRIP"; then
striplib="$STRIP -x"
old_striplib="$STRIP -S"
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
fi
;;
*)
AC_MSG_RESULT([no])
;;
esac
fi
_LT_DECL([], [old_striplib], [1], [Commands to strip libraries])
_LT_DECL([], [striplib], [1])
])# _LT_CMD_STRIPLIB
# _LT_PREPARE_MUNGE_PATH_LIST
# ---------------------------
# Make sure func_munge_path_list() is defined correctly.
m4_defun([_LT_PREPARE_MUNGE_PATH_LIST],
[[# func_munge_path_list VARIABLE PATH
# -----------------------------------
# VARIABLE is the name of a variable containing a _space_-separated list of
# directories to be munged by the contents of PATH, which is a string
# having the format:
# "DIR[:DIR]:"
# string "DIR[ DIR]" will be prepended to VARIABLE
# ":DIR[:DIR]"
# string "DIR[ DIR]" will be appended to VARIABLE
# "DIRP[:DIRP]::[DIRA:]DIRA"
# string "DIRP[ DIRP]" will be prepended to VARIABLE and string
# "DIRA[ DIRA]" will be appended to VARIABLE
# "DIR[:DIR]"
# VARIABLE will be replaced by "DIR[ DIR]"
func_munge_path_list ()
{
case x@S|@2 in
x)
;;
*:)
eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'` \@S|@@S|@1\"
;;
x:*)
eval @S|@1=\"\@S|@@S|@1 `$ECHO @S|@2 | $SED 's/:/ /g'`\"
;;
*::*)
eval @S|@1=\"\@S|@@S|@1\ `$ECHO @S|@2 | $SED -e 's/.*:://' -e 's/:/ /g'`\"
eval @S|@1=\"`$ECHO @S|@2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \@S|@@S|@1\"
;;
*)
eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'`\"
;;
esac
}
]])# _LT_PREPARE_MUNGE_PATH_LIST
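# For example, with sys_lib_dlsearch_path_spec="/lib /usr/lib" (and /opt/lib as
# an arbitrary example directory):
#   func_munge_path_list sys_lib_dlsearch_path_spec "/opt/lib:"  prepends: "/opt/lib /lib /usr/lib"
#   func_munge_path_list sys_lib_dlsearch_path_spec ":/opt/lib"  appends:  "/lib /usr/lib /opt/lib"
#   func_munge_path_list sys_lib_dlsearch_path_spec "/opt/lib"   replaces: "/opt/lib"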
# _LT_SYS_DYNAMIC_LINKER([TAG])
# -----------------------------
# PORTME Fill in your ld.so characteristics
m4_defun([_LT_SYS_DYNAMIC_LINKER],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_OBJDUMP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CHECK_SHELL_FEATURES])dnl
m4_require([_LT_PREPARE_MUNGE_PATH_LIST])dnl
AC_MSG_CHECKING([dynamic linker characteristics])
m4_if([$1],
[], [
if test yes = "$GCC"; then
case $host_os in
darwin*) lt_awk_arg='/^libraries:/,/LR/' ;;
*) lt_awk_arg='/^libraries:/' ;;
esac
case $host_os in
mingw* | cegcc*) lt_sed_strip_eq='s|=\([[A-Za-z]]:\)|\1|g' ;;
*) lt_sed_strip_eq='s|=/|/|g' ;;
esac
lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq`
case $lt_search_path_spec in
*\;*)
# if the path contains ";" then we assume it to be the separator
# otherwise default to the standard path separator (i.e. ":") - it is
# assumed that no part of a normal pathname contains ";" but that should
# be okay in the real world, where ";" in dirpaths is itself problematic.
lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'`
;;
*)
lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"`
;;
esac
# Ok, now we have the path, separated by spaces, we can step through it
# and add multilib dir if necessary...
lt_tmp_lt_search_path_spec=
lt_multi_os_dir=/`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null`
# ...but if some path component already ends with the multilib dir we assume
# that all is fine and trust -print-search-dirs as is (GCC 4.2? or newer).
case "$lt_multi_os_dir; $lt_search_path_spec " in
"/; "* | "/.; "* | "/./; "* | *"$lt_multi_os_dir "* | *"$lt_multi_os_dir/ "*)
lt_multi_os_dir=
;;
esac
for lt_sys_path in $lt_search_path_spec; do
if test -d "$lt_sys_path$lt_multi_os_dir"; then
lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path$lt_multi_os_dir"
elif test -n "$lt_multi_os_dir"; then
test -d "$lt_sys_path" && \
lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path"
fi
done
lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk '
BEGIN {RS = " "; FS = "/|\n";} {
lt_foo = "";
lt_count = 0;
for (lt_i = NF; lt_i > 0; lt_i--) {
if ($lt_i != "" && $lt_i != ".") {
if ($lt_i == "..") {
lt_count++;
} else {
if (lt_count == 0) {
lt_foo = "/" $lt_i lt_foo;
} else {
lt_count--;
}
}
}
}
if (lt_foo != "") { lt_freq[[lt_foo]]++; }
if (lt_freq[[lt_foo]] == 1) { print lt_foo; }
}'`
# AWK program above erroneously prepends '/' to C:/dos/paths
# for these hosts.
case $host_os in
mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\
$SED 's|/\([[A-Za-z]]:\)|\1|g'` ;;
esac
sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP`
else
sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
fi])
library_names_spec=
libname_spec='lib$name'
soname_spec=
shrext_cmds=.so
postinstall_cmds=
postuninstall_cmds=
finish_cmds=
finish_eval=
shlibpath_var=
shlibpath_overrides_runpath=unknown
version_type=none
dynamic_linker="$host_os ld.so"
sys_lib_dlsearch_path_spec="/lib /usr/lib"
need_lib_prefix=unknown
hardcode_into_libs=no
# when you set need_version to no, make sure it does not cause -set_version
# flags to be left without arguments
need_version=unknown
AC_ARG_VAR([LT_SYS_LIBRARY_PATH],
[User-defined run-time library search path.])
case $host_os in
aix3*)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname.a'
shlibpath_var=LIBPATH
# AIX 3 has no versioning support, so we append a major version to the name.
soname_spec='$libname$release$shared_ext$major'
;;
aix[[4-9]]*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
hardcode_into_libs=yes
if test ia64 = "$host_cpu"; then
# AIX 5 supports IA64
library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
else
# With GCC up to 2.95.x, collect2 would create an import file
# for dependence libraries. The import file would start with
# the line '#! .'. This would cause the generated library to
# depend on '.', always an invalid library. This was fixed in
# development snapshots of GCC prior to 3.0.
case $host_os in
aix4 | aix4.[[01]] | aix4.[[01]].*)
if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
echo ' yes '
echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then
:
else
can_build_shared=no
fi
;;
esac
# Using Import Files as archive members, it is possible to support
# filename-based versioning of shared library archives on AIX. While
# this would work for both with and without runtime linking, it will
# prevent static linking of such archives. So we do filename-based
# shared library versioning with .so extension only, which is used
# when both runtime linking and shared linking is enabled.
# Unfortunately, runtime linking may impact performance, so we do
# not want this to be the default eventually. Also, we use the
# versioned .so libs for executables only if there is the -brtl
# linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only.
# To allow for filename-based versioning support, we need to create
# libNAME.so.V as an archive file, containing:
# *) an Import File, referring to the versioned filename of the
# archive as well as the shared archive member, telling the
# bitwidth (32 or 64) of that shared object, and providing the
# list of exported symbols of that shared object, eventually
# decorated with the 'weak' keyword
# *) the shared object with the F_LOADONLY flag set, to really avoid
# it being seen by the linker.
# At run time we better use the real file rather than another symlink,
# but for link time we create the symlink libNAME.so -> libNAME.so.V
case $with_aix_soname,$aix_use_runtimelinking in
# AIX (on Power*) has no versioning support, so currently we cannot hardcode the
# correct soname into the executable. Probably we can add versioning support to
# collect2, so additional links can be useful in the future.
aix,yes) # traditional libtool
dynamic_linker='AIX unversionable lib.so'
# If using run time linking (on AIX 4.2 or later) use lib.so
# instead of lib.a to let people know that these are not
# typical AIX shared libraries.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
;;
aix,no) # traditional AIX only
dynamic_linker='AIX lib.a[(]lib.so.V[)]'
# We preserve .a as extension for shared libraries through AIX4.2
# and later when we are not doing run time linking.
library_names_spec='$libname$release.a $libname.a'
soname_spec='$libname$release$shared_ext$major'
;;
svr4,*) # full svr4 only
dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)]"
library_names_spec='$libname$release$shared_ext$major $libname$shared_ext'
# We do not specify a path in Import Files, so LIBPATH fires.
shlibpath_overrides_runpath=yes
;;
*,yes) # both, prefer svr4
dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)], lib.a[(]lib.so.V[)]"
library_names_spec='$libname$release$shared_ext$major $libname$shared_ext'
# unpreferred sharedlib libNAME.a needs extra handling
postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"'
postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"'
# We do not specify a path in Import Files, so LIBPATH fires.
shlibpath_overrides_runpath=yes
;;
*,no) # both, prefer aix
dynamic_linker="AIX lib.a[(]lib.so.V[)], lib.so.V[(]$shared_archive_member_spec.o[)]"
library_names_spec='$libname$release.a $libname.a'
soname_spec='$libname$release$shared_ext$major'
# unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling
postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)'
postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"'
;;
esac
shlibpath_var=LIBPATH
fi
;;
amigaos*)
case $host_cpu in
powerpc)
# Since July 2007 AmigaOS4 officially supports .so libraries.
# When compiling the executable, add -use-dynld -Lsobjs: to the compileline.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
;;
m68k)
library_names_spec='$libname.ixlibrary $libname.a'
# Create ${libname}_ixlibrary.a entries in /sys/libs.
finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
;;
esac
;;
beos*)
library_names_spec='$libname$shared_ext'
dynamic_linker="$host_os ld.so"
shlibpath_var=LIBRARY_PATH
;;
bsdi[[45]]*)
version_type=linux # correct to gnu/linux during the next big refactor
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
shlibpath_var=LD_LIBRARY_PATH
sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
# the default ld.so.conf also contains /usr/contrib/lib and
# /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
# libtool to hard-code these into programs
;;
cygwin* | mingw* | pw32* | cegcc*)
version_type=windows
shrext_cmds=.dll
need_version=no
need_lib_prefix=no
case $GCC,$cc_basename in
yes,*)
# gcc
library_names_spec='$libname.dll.a'
# DLL is installed to $(libdir)/../bin by postinstall_cmds
postinstall_cmds='base_file=`basename \$file`~
dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~
dldir=$destdir/`dirname \$dlpath`~
test -d \$dldir || mkdir -p \$dldir~
$install_prog $dir/$dlname \$dldir/$dlname~
chmod a+x \$dldir/$dlname~
if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
fi'
postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
dlpath=$dir/\$dldll~
$RM \$dlpath'
shlibpath_overrides_runpath=yes
case $host_os in
cygwin*)
# Cygwin DLLs use 'cyg' prefix rather than 'lib'
soname_spec='`echo $libname | sed -e 's/^lib/cyg/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
m4_if([$1], [],[
sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"])
;;
mingw* | cegcc*)
# MinGW DLLs use traditional 'lib' prefix
soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
;;
pw32*)
# pw32 DLLs use 'pw' prefix rather than 'lib'
library_names_spec='`echo $libname | sed -e 's/^lib/pw/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
;;
esac
dynamic_linker='Win32 ld.exe'
;;
*,cl*)
# Native MSVC
libname_spec='$name'
soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
library_names_spec='$libname.dll.lib'
case $build_os in
mingw*)
sys_lib_search_path_spec=
lt_save_ifs=$IFS
IFS=';'
for lt_path in $LIB
do
IFS=$lt_save_ifs
# Let DOS variable expansion print the short 8.3 style file name.
lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
done
IFS=$lt_save_ifs
# Convert to MSYS style.
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'`
;;
cygwin*)
# Convert to unix form, then to dos form, then back to unix form
# but this time dos style (no spaces!) so that the unix form looks
# like /cygdrive/c/PROGRA~1:/cygdr...
sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
;;
*)
sys_lib_search_path_spec=$LIB
if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then
# It is most probably a Windows format PATH.
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
else
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
fi
# FIXME: find the short name or the path components, as spaces are
# common. (e.g. "Program Files" -> "PROGRA~1")
;;
esac
# DLL is installed to $(libdir)/../bin by postinstall_cmds
postinstall_cmds='base_file=`basename \$file`~
dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~
dldir=$destdir/`dirname \$dlpath`~
test -d \$dldir || mkdir -p \$dldir~
$install_prog $dir/$dlname \$dldir/$dlname'
postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
dlpath=$dir/\$dldll~
$RM \$dlpath'
shlibpath_overrides_runpath=yes
dynamic_linker='Win32 link.exe'
;;
*)
# Assume MSVC wrapper
library_names_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext $libname.lib'
dynamic_linker='Win32 ld.exe'
;;
esac
# FIXME: first we should search . and the directory the executable is in
shlibpath_var=PATH
;;
darwin* | rhapsody*)
dynamic_linker="$host_os dyld"
version_type=darwin
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$major$shared_ext $libname$shared_ext'
soname_spec='$libname$release$major$shared_ext'
shlibpath_overrides_runpath=yes
shlibpath_var=DYLD_LIBRARY_PATH
shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`'
m4_if([$1], [],[
sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"])
sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
;;
dgux*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
;;
freebsd* | dragonfly*)
# DragonFly does not have aout. When/if they implement a new
# versioning mechanism, adjust this.
if test -x /usr/bin/objformat; then
objformat=`/usr/bin/objformat`
else
case $host_os in
freebsd[[23]].*) objformat=aout ;;
*) objformat=elf ;;
esac
fi
version_type=freebsd-$objformat
case $version_type in
freebsd-elf*)
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
need_version=no
need_lib_prefix=no
;;
freebsd-*)
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
need_version=yes
;;
esac
shlibpath_var=LD_LIBRARY_PATH
case $host_os in
freebsd2.*)
shlibpath_overrides_runpath=yes
;;
freebsd3.[[01]]* | freebsdelf3.[[01]]*)
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
;;
freebsd3.[[2-9]]* | freebsdelf3.[[2-9]]* | \
freebsd4.[[0-5]] | freebsdelf4.[[0-5]] | freebsd4.1.1 | freebsdelf4.1.1)
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
;;
*) # from 4.6 on, and DragonFly
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
;;
esac
;;
haiku*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
dynamic_linker="$host_os runtime_loader"
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LIBRARY_PATH
shlibpath_overrides_runpath=no
sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
hardcode_into_libs=yes
;;
hpux9* | hpux10* | hpux11*)
# Give a soname corresponding to the major version so that dld.sl refuses to
# link against other versions.
version_type=sunos
need_lib_prefix=no
need_version=no
case $host_cpu in
ia64*)
shrext_cmds='.so'
hardcode_into_libs=yes
dynamic_linker="$host_os dld.so"
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
if test 32 = "$HPUX_IA64_MODE"; then
sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
sys_lib_dlsearch_path_spec=/usr/lib/hpux32
else
sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
sys_lib_dlsearch_path_spec=/usr/lib/hpux64
fi
;;
hppa*64*)
shrext_cmds='.sl'
hardcode_into_libs=yes
dynamic_linker="$host_os dld.sl"
shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
;;
*)
shrext_cmds='.sl'
dynamic_linker="$host_os dld.sl"
shlibpath_var=SHLIB_PATH
shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
;;
esac
# HP-UX runs *really* slowly unless shared libraries are mode 555, ...
postinstall_cmds='chmod 555 $lib'
# or fails outright, so override atomically:
install_override_mode=555
;;
interix[[3-9]]*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
;;
irix5* | irix6* | nonstopux*)
case $host_os in
nonstopux*) version_type=nonstopux ;;
*)
if test yes = "$lt_cv_prog_gnu_ld"; then
version_type=linux # correct to gnu/linux during the next big refactor
else
version_type=irix
fi ;;
esac
need_lib_prefix=no
need_version=no
soname_spec='$libname$release$shared_ext$major'
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext'
case $host_os in
irix5* | nonstopux*)
libsuff= shlibsuff=
;;
*)
case $LD in # libtool.m4 will add one of these switches to LD
*-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
libsuff= shlibsuff= libmagic=32-bit;;
*-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
libsuff=32 shlibsuff=N32 libmagic=N32;;
*-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
libsuff=64 shlibsuff=64 libmagic=64-bit;;
*) libsuff= shlibsuff= libmagic=never-match;;
esac
;;
esac
shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
shlibpath_overrides_runpath=no
sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff"
sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff"
hardcode_into_libs=yes
;;
# No shared lib support for Linux oldld, aout, or coff.
linux*oldld* | linux*aout* | linux*coff*)
dynamic_linker=no
;;
linux*android*)
version_type=none # Android doesn't support versioned libraries.
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext'
soname_spec='$libname$release$shared_ext'
finish_cmds=
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
# This implies no fast_install, which is unacceptable.
# Some rework will be needed to allow for fast_install
# before this can be enabled.
hardcode_into_libs=yes
dynamic_linker='Android linker'
# Don't embed -rpath directories since the linker doesn't support them.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
;;
# This must be glibc/ELF.
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
# Some binutils ld are patched to set DT_RUNPATH
AC_CACHE_VAL([lt_cv_shlibpath_overrides_runpath],
[lt_cv_shlibpath_overrides_runpath=no
save_LDFLAGS=$LDFLAGS
save_libdir=$libdir
eval "libdir=/foo; wl=\"$_LT_TAGVAR(lt_prog_compiler_wl, $1)\"; \
LDFLAGS=\"\$LDFLAGS $_LT_TAGVAR(hardcode_libdir_flag_spec, $1)\""
AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])],
[AS_IF([ ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null],
[lt_cv_shlibpath_overrides_runpath=yes])])
LDFLAGS=$save_LDFLAGS
libdir=$save_libdir
])
shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath
# This implies no fast_install, which is unacceptable.
# Some rework will be needed to allow for fast_install
# before this can be enabled.
hardcode_into_libs=yes
# Ideally, we could use ldconfig to report *all* directories which are
# searched for libraries, however this is still not possible. Aside from not
# being certain /sbin/ldconfig is available, command
# 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64,
# even though it is searched at run-time. Try to do the best guess by
# appending ld.so.conf contents (and includes) to the search path.
if test -f /etc/ld.so.conf; then
lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '`
sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra"
fi
# We used to test for /lib/ld.so.1 and disable shared libraries on
# powerpc, because MkLinux only supported shared libraries with the
# GNU dynamic linker. Since this was broken with cross compilers,
# most powerpc-linux boxes support dynamic linking these days and
# people can always --disable-shared, the test was removed, and we
# assume the GNU/Linux dynamic linker is in use.
dynamic_linker='GNU/Linux ld.so'
;;
netbsdelf*-gnu)
version_type=linux
need_lib_prefix=no
need_version=no
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
soname_spec='${libname}${release}${shared_ext}$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
dynamic_linker='NetBSD ld.elf_so'
;;
netbsd*)
version_type=sunos
need_lib_prefix=no
need_version=no
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
dynamic_linker='NetBSD (a.out) ld.so'
else
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
dynamic_linker='NetBSD ld.elf_so'
fi
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
;;
newsos6)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
;;
*nto* | *qnx*)
version_type=qnx
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
dynamic_linker='ldqnx.so'
;;
openbsd* | bitrig*)
version_type=sunos
sys_lib_dlsearch_path_spec=/usr/lib
need_lib_prefix=no
if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
need_version=no
else
need_version=yes
fi
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
;;
os2*)
libname_spec='$name'
version_type=windows
shrext_cmds=.dll
need_version=no
need_lib_prefix=no
# OS/2 can only load a DLL with a base name of 8 characters or less.
soname_spec='`test -n "$os2dllname" && libname="$os2dllname";
v=$($ECHO $release$versuffix | tr -d .-);
n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _);
$ECHO $n$v`$shared_ext'
library_names_spec='${libname}_dll.$libext'
dynamic_linker='OS/2 ld.exe'
shlibpath_var=BEGINLIBPATH
sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
postinstall_cmds='base_file=`basename \$file`~
dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~
dldir=$destdir/`dirname \$dlpath`~
test -d \$dldir || mkdir -p \$dldir~
$install_prog $dir/$dlname \$dldir/$dlname~
chmod a+x \$dldir/$dlname~
if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
fi'
postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; $ECHO \$dlname'\''`~
dlpath=$dir/\$dldll~
$RM \$dlpath'
;;
osf3* | osf4* | osf5*)
version_type=osf
need_lib_prefix=no
need_version=no
soname_spec='$libname$release$shared_ext$major'
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
;;
rdos*)
dynamic_linker=no
;;
solaris*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
# ldd complains unless libraries are executable
postinstall_cmds='chmod +x $lib'
;;
sunos4*)
version_type=sunos
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
if test yes = "$with_gnu_ld"; then
need_lib_prefix=no
fi
need_version=yes
;;
sysv4 | sysv4.3*)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
case $host_vendor in
sni)
shlibpath_overrides_runpath=no
need_lib_prefix=no
runpath_var=LD_RUN_PATH
;;
siemens)
need_lib_prefix=no
;;
motorola)
need_lib_prefix=no
need_version=no
shlibpath_overrides_runpath=no
sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
;;
esac
;;
sysv4*MP*)
if test -d /usr/nec; then
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext'
soname_spec='$libname$shared_ext.$major'
shlibpath_var=LD_LIBRARY_PATH
fi
;;
sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*)
version_type=sco
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
if test yes = "$with_gnu_ld"; then
sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib'
else
sys_lib_search_path_spec='/usr/ccs/lib /usr/lib'
case $host_os in
sco3.2v5*)
sys_lib_search_path_spec="$sys_lib_search_path_spec /lib"
;;
esac
fi
sys_lib_dlsearch_path_spec='/usr/lib'
;;
tpf*)
# TPF is a cross-target only. Preferred cross-host = GNU/Linux.
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
;;
uts4*)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
;;
*)
dynamic_linker=no
;;
esac
AC_MSG_RESULT([$dynamic_linker])
test no = "$dynamic_linker" && can_build_shared=no
variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
if test yes = "$GCC"; then
variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
fi
if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then
sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec
fi
if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then
sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec
fi
# remember unaugmented sys_lib_dlsearch_path content for libtool script decls...
configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec
# ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code
func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH"
# to be used as default LT_SYS_LIBRARY_PATH value in generated libtool
configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH
_LT_DECL([], [variables_saved_for_relink], [1],
[Variables whose values should be saved in libtool wrapper scripts and
restored at link time])
_LT_DECL([], [need_lib_prefix], [0],
[Do we need the "lib" prefix for modules?])
_LT_DECL([], [need_version], [0], [Do we need a version for libraries?])
_LT_DECL([], [version_type], [0], [Library versioning type])
_LT_DECL([], [runpath_var], [0], [Shared library runtime path variable])
_LT_DECL([], [shlibpath_var], [0],[Shared library path variable])
_LT_DECL([], [shlibpath_overrides_runpath], [0],
[Is shlibpath searched before the hard-coded library search path?])
_LT_DECL([], [libname_spec], [1], [Format of library name prefix])
_LT_DECL([], [library_names_spec], [1],
[[List of archive names. First name is the real one, the rest are links.
The last name is the one that the linker finds with -lNAME]])
_LT_DECL([], [soname_spec], [1],
[[The coded name of the library, if different from the real name]])
_LT_DECL([], [install_override_mode], [1],
[Permission mode override for installation of shared libraries])
_LT_DECL([], [postinstall_cmds], [2],
[Command to use after installation of a shared archive])
_LT_DECL([], [postuninstall_cmds], [2],
[Command to use after uninstallation of a shared archive])
_LT_DECL([], [finish_cmds], [2],
[Commands used to finish a libtool library installation in a directory])
_LT_DECL([], [finish_eval], [1],
[[As "finish_cmds", except a single script fragment to be evaled but
not shown]])
_LT_DECL([], [hardcode_into_libs], [0],
[Whether we should hardcode library paths into libraries])
_LT_DECL([], [sys_lib_search_path_spec], [2],
[Compile-time system search path for libraries])
_LT_DECL([sys_lib_dlsearch_path_spec], [configure_time_dlsearch_path], [2],
[Detected run-time system search path for libraries])
_LT_DECL([], [configure_time_lt_sys_library_path], [2],
[Explicit LT_SYS_LIBRARY_PATH set during ./configure time])
])# _LT_SYS_DYNAMIC_LINKER
# _LT_PATH_TOOL_PREFIX(TOOL)
# --------------------------
# find a file program that can recognize shared library
AC_DEFUN([_LT_PATH_TOOL_PREFIX],
[m4_require([_LT_DECL_EGREP])dnl
AC_MSG_CHECKING([for $1])
AC_CACHE_VAL(lt_cv_path_MAGIC_CMD,
[case $MAGIC_CMD in
[[\\/*] | ?:[\\/]*])
lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path.
;;
*)
lt_save_MAGIC_CMD=$MAGIC_CMD
lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
dnl $ac_dummy forces splitting on constant user-supplied paths.
dnl POSIX.2 word splitting is done only on the output of word expansions,
dnl not every word. This closes a longstanding sh security hole.
ac_dummy="m4_if([$2], , $PATH, [$2])"
for ac_dir in $ac_dummy; do
IFS=$lt_save_ifs
test -z "$ac_dir" && ac_dir=.
if test -f "$ac_dir/$1"; then
lt_cv_path_MAGIC_CMD=$ac_dir/"$1"
if test -n "$file_magic_test_file"; then
case $deplibs_check_method in
"file_magic "*)
file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"`
MAGIC_CMD=$lt_cv_path_MAGIC_CMD
if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
$EGREP "$file_magic_regex" > /dev/null; then
:
else
cat <<_LT_EOF 1>&2
*** Warning: the command libtool uses to detect shared libraries,
*** $file_magic_cmd, produces output that libtool cannot recognize.
*** The result is that libtool may fail to recognize shared libraries
*** as such. This will affect the creation of libtool libraries that
*** depend on shared libraries, but programs linked with such libtool
*** libraries will work regardless of this problem. Nevertheless, you
*** may want to report the problem to your system manager and/or to
*** bug-libtool@gnu.org
_LT_EOF
fi ;;
esac
fi
break
fi
done
IFS=$lt_save_ifs
MAGIC_CMD=$lt_save_MAGIC_CMD
;;
esac])
MAGIC_CMD=$lt_cv_path_MAGIC_CMD
if test -n "$MAGIC_CMD"; then
AC_MSG_RESULT($MAGIC_CMD)
else
AC_MSG_RESULT(no)
fi
_LT_DECL([], [MAGIC_CMD], [0],
[Used to examine libraries when file_magic_cmd begins with "file"])dnl
])# _LT_PATH_TOOL_PREFIX
# Old name:
AU_ALIAS([AC_PATH_TOOL_PREFIX], [_LT_PATH_TOOL_PREFIX])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_PATH_TOOL_PREFIX], [])
# _LT_PATH_MAGIC
# --------------
# find a file program that can recognize a shared library
m4_defun([_LT_PATH_MAGIC],
[_LT_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH)
if test -z "$lt_cv_path_MAGIC_CMD"; then
if test -n "$ac_tool_prefix"; then
_LT_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH)
else
MAGIC_CMD=:
fi
fi
])# _LT_PATH_MAGIC
# LT_PATH_LD
# ----------
# find the pathname to the GNU or non-GNU linker
AC_DEFUN([LT_PATH_LD],
[AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_PROG_ECHO_BACKSLASH])dnl
AC_ARG_WITH([gnu-ld],
[AS_HELP_STRING([--with-gnu-ld],
[assume the C compiler uses GNU ld @<:@default=no@:>@])],
[test no = "$withval" || with_gnu_ld=yes],
[with_gnu_ld=no])dnl
ac_prog=ld
if test yes = "$GCC"; then
# Check if gcc -print-prog-name=ld gives a path.
AC_MSG_CHECKING([for ld used by $CC])
case $host in
*-*-mingw*)
# gcc leaves a trailing carriage return, which upsets mingw
ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;;
*)
ac_prog=`($CC -print-prog-name=ld) 2>&5` ;;
esac
case $ac_prog in
# Accept absolute paths.
[[\\/]]* | ?:[[\\/]]*)
re_direlt='/[[^/]][[^/]]*/\.\./'
# Canonicalize the pathname of ld
ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'`
while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do
ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"`
done
test -z "$LD" && LD=$ac_prog
;;
"")
# If it fails, then pretend we aren't using GCC.
ac_prog=ld
;;
*)
# If it is relative, then search for the first ld in PATH.
with_gnu_ld=unknown
;;
esac
elif test yes = "$with_gnu_ld"; then
AC_MSG_CHECKING([for GNU ld])
else
AC_MSG_CHECKING([for non-GNU ld])
fi
AC_CACHE_VAL(lt_cv_path_LD,
[if test -z "$LD"; then
lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
for ac_dir in $PATH; do
IFS=$lt_save_ifs
test -z "$ac_dir" && ac_dir=.
if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
lt_cv_path_LD=$ac_dir/$ac_prog
# Check to see if the program is GNU ld. I'd rather use --version,
# but apparently some variants of GNU ld only accept -v.
# Break only if it was the GNU/non-GNU ld that we prefer.
case `"$lt_cv_path_LD" -v 2>&1 &1 conftest.i
cat conftest.i conftest.i >conftest2.i
: ${lt_DD:=$DD}
AC_PATH_PROGS_FEATURE_CHECK([lt_DD], [dd],
[if "$ac_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then
cmp -s conftest.i conftest.out \
&& ac_cv_path_lt_DD="$ac_path_lt_DD" ac_path_lt_DD_found=:
fi])
rm -f conftest.i conftest2.i conftest.out])
])# _LT_PATH_DD
# _LT_CMD_TRUNCATE
# ----------------
# find command to truncate a binary pipe
m4_defun([_LT_CMD_TRUNCATE],
[m4_require([_LT_PATH_DD])
AC_CACHE_CHECK([how to truncate binary pipes], [lt_cv_truncate_bin],
[printf 0123456789abcdef0123456789abcdef >conftest.i
cat conftest.i conftest.i >conftest2.i
lt_cv_truncate_bin=
if "$ac_cv_path_lt_DD" bs=32 count=1 conftest.out 2>/dev/null; then
cmp -s conftest.i conftest.out \
&& lt_cv_truncate_bin="$ac_cv_path_lt_DD bs=4096 count=1"
fi
rm -f conftest.i conftest2.i conftest.out
test -z "$lt_cv_truncate_bin" && lt_cv_truncate_bin="$SED -e 4q"])
_LT_DECL([lt_truncate_bin], [lt_cv_truncate_bin], [1],
[Command to truncate a binary pipe])
])# _LT_CMD_TRUNCATE
# _LT_CHECK_MAGIC_METHOD
# ----------------------
# how to check for library dependencies
# -- PORTME fill in with the dynamic library characteristics
m4_defun([_LT_CHECK_MAGIC_METHOD],
[m4_require([_LT_DECL_EGREP])
m4_require([_LT_DECL_OBJDUMP])
AC_CACHE_CHECK([how to recognize dependent libraries],
lt_cv_deplibs_check_method,
[lt_cv_file_magic_cmd='$MAGIC_CMD'
lt_cv_file_magic_test_file=
lt_cv_deplibs_check_method='unknown'
# Need to set the preceding variable on all platforms that support
# interlibrary dependencies.
# 'none' -- dependencies not supported.
# 'unknown' -- same as none, but documents that we really don't know.
# 'pass_all' -- all dependencies passed with no checks.
# 'test_compile' -- check by making test program.
# 'file_magic [[regex]]' -- check by looking for files in library path
# that respond to the $file_magic_cmd with a given extended regex.
# If you have 'file' or equivalent on your system and you're not sure
# whether 'pass_all' will *always* work, you probably want this one.
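# Illustrative example of the 'file_magic' method (a hypothetical entry, not
# one of the actual cases below): a platform would set something like
#   lt_cv_deplibs_check_method='file_magic ELF .*shared object'
#   lt_cv_file_magic_cmd='/usr/bin/file -L'
#   lt_cv_file_magic_test_file=/usr/lib/libc.so
# i.e. $file_magic_cmd is run on each candidate library found in the search
# path and its output must match the extended regex for the dependency to be
# accepted; lt_cv_file_magic_test_file lets _LT_PATH_TOOL_PREFIX above
# sanity-check the chosen command.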
case $host_os in
aix[[4-9]]*)
lt_cv_deplibs_check_method=pass_all
;;
beos*)
lt_cv_deplibs_check_method=pass_all
;;
bsdi[[45]]*)
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib)'
lt_cv_file_magic_cmd='/usr/bin/file -L'
lt_cv_file_magic_test_file=/shlib/libc.so
;;
cygwin*)
# func_win32_libid is a shell function defined in ltmain.sh
lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
lt_cv_file_magic_cmd='func_win32_libid'
;;
mingw* | pw32*)
# Base MSYS/MinGW do not provide the 'file' command needed by
# func_win32_libid shell function, so use a weaker test based on 'objdump',
# unless we find 'file', for example because we are cross-compiling.
if ( file / ) >/dev/null 2>&1; then
lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
lt_cv_file_magic_cmd='func_win32_libid'
else
# Keep this pattern in sync with the one in func_win32_libid.
lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
lt_cv_file_magic_cmd='$OBJDUMP -f'
fi
;;
cegcc*)
# use the weaker test based on 'objdump'. See mingw*.
lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?'
lt_cv_file_magic_cmd='$OBJDUMP -f'
;;
darwin* | rhapsody*)
lt_cv_deplibs_check_method=pass_all
;;
freebsd* | dragonfly*)
if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
case $host_cpu in
i*86 )
# Not sure whether the presence of OpenBSD here was a mistake.
# Let's accept both of them until this is cleared up.
lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[[3-9]]86 (compact )?demand paged shared library'
lt_cv_file_magic_cmd=/usr/bin/file
lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
;;
esac
else
lt_cv_deplibs_check_method=pass_all
fi
;;
haiku*)
lt_cv_deplibs_check_method=pass_all
;;
hpux10.20* | hpux11*)
lt_cv_file_magic_cmd=/usr/bin/file
case $host_cpu in
ia64*)
lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64'
lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so
;;
hppa*64*)
[lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]']
lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl
;;
*)
lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]]\.[[0-9]]) shared library'
lt_cv_file_magic_test_file=/usr/lib/libc.sl
;;
esac
;;
interix[[3-9]]*)
# PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|\.a)$'
;;
irix5* | irix6* | nonstopux*)
case $LD in
*-32|*"-32 ") libmagic=32-bit;;
*-n32|*"-n32 ") libmagic=N32;;
*-64|*"-64 ") libmagic=64-bit;;
*) libmagic=never-match;;
esac
lt_cv_deplibs_check_method=pass_all
;;
# This must be glibc/ELF.
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
lt_cv_deplibs_check_method=pass_all
;;
netbsd* | netbsdelf*-gnu)
if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$'
else
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$'
fi
;;
newos6*)
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)'
lt_cv_file_magic_cmd=/usr/bin/file
lt_cv_file_magic_test_file=/usr/lib/libnls.so
;;
*nto* | *qnx*)
lt_cv_deplibs_check_method=pass_all
;;
openbsd* | bitrig*)
if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|\.so|_pic\.a)$'
else
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$'
fi
;;
osf3* | osf4* | osf5*)
lt_cv_deplibs_check_method=pass_all
;;
rdos*)
lt_cv_deplibs_check_method=pass_all
;;
solaris*)
lt_cv_deplibs_check_method=pass_all
;;
sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*)
lt_cv_deplibs_check_method=pass_all
;;
sysv4 | sysv4.3*)
case $host_vendor in
motorola)
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]'
lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*`
;;
ncr)
lt_cv_deplibs_check_method=pass_all
;;
sequent)
lt_cv_file_magic_cmd='/bin/file'
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )'
;;
sni)
lt_cv_file_magic_cmd='/bin/file'
lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib"
lt_cv_file_magic_test_file=/lib/libc.so
;;
siemens)
lt_cv_deplibs_check_method=pass_all
;;
pc)
lt_cv_deplibs_check_method=pass_all
;;
esac
;;
tpf*)
lt_cv_deplibs_check_method=pass_all
;;
os2*)
lt_cv_deplibs_check_method=pass_all
;;
esac
])
file_magic_glob=
want_nocaseglob=no
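# For illustration: on MinGW/MSYS the block below either turns on bash's
# nocaseglob option or builds file_magic_glob, a sed program of the form
#   s/[aA]/[aA]/g;s/[bB]/[bB]/g;...
# which, applied to a library name such as foo.DLL, yields the pattern
#   [fF][oO][oO].[dD][lL][lL]
# presumably so that DLL names can be matched case-insensitively.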
if test "$build" = "$host"; then
case $host_os in
mingw* | pw32*)
if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
want_nocaseglob=yes
else
file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"`
fi
;;
esac
fi
file_magic_cmd=$lt_cv_file_magic_cmd
deplibs_check_method=$lt_cv_deplibs_check_method
test -z "$deplibs_check_method" && deplibs_check_method=unknown
_LT_DECL([], [deplibs_check_method], [1],
[Method to check whether dependent libraries are shared objects])
_LT_DECL([], [file_magic_cmd], [1],
[Command to use when deplibs_check_method = "file_magic"])
_LT_DECL([], [file_magic_glob], [1],
[How to find potential files when deplibs_check_method = "file_magic"])
_LT_DECL([], [want_nocaseglob], [1],
[Find potential files using nocaseglob when deplibs_check_method = "file_magic"])
])# _LT_CHECK_MAGIC_METHOD
# LT_PATH_NM
# ----------
# find the pathname to a BSD- or MS-compatible name lister
AC_DEFUN([LT_PATH_NM],
[AC_REQUIRE([AC_PROG_CC])dnl
AC_CACHE_CHECK([for BSD- or MS-compatible name lister (nm)], lt_cv_path_NM,
[if test -n "$NM"; then
# Let the user override the test.
lt_cv_path_NM=$NM
else
lt_nm_to_check=${ac_tool_prefix}nm
if test -n "$ac_tool_prefix" && test "$build" = "$host"; then
lt_nm_to_check="$lt_nm_to_check nm"
fi
for lt_tmp_nm in $lt_nm_to_check; do
lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do
IFS=$lt_save_ifs
test -z "$ac_dir" && ac_dir=.
tmp_nm=$ac_dir/$lt_tmp_nm
if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext"; then
# Check to see if the nm accepts a BSD-compat flag.
# Adding the 'sed 1q' prevents false positives on HP-UX, which says:
# nm: unknown option "B" ignored
# Tru64's nm complains that /dev/null is an invalid object file
# MSYS converts /dev/null to NUL, MinGW nm treats NUL as empty
case $build_os in
mingw*) lt_bad_file=conftest.nm/nofile ;;
*) lt_bad_file=/dev/null ;;
esac
case `"$tmp_nm" -B $lt_bad_file 2>&1 | sed '1q'` in
*$lt_bad_file* | *'Invalid file or object type'*)
lt_cv_path_NM="$tmp_nm -B"
break 2
;;
*)
case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in
*/dev/null*)
lt_cv_path_NM="$tmp_nm -p"
break 2
;;
*)
lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but
continue # so that we can try to find one that supports BSD flags
;;
esac
;;
esac
fi
done
IFS=$lt_save_ifs
done
: ${lt_cv_path_NM=no}
fi])
if test no != "$lt_cv_path_NM"; then
NM=$lt_cv_path_NM
else
# Didn't find any BSD compatible name lister, look for dumpbin.
if test -n "$DUMPBIN"; then :
# Let the user override the test.
else
AC_CHECK_TOOLS(DUMPBIN, [dumpbin "link -dump"], :)
case `$DUMPBIN -symbols -headers /dev/null 2>&1 | sed '1q'` in
*COFF*)
DUMPBIN="$DUMPBIN -symbols -headers"
;;
*)
DUMPBIN=:
;;
esac
fi
AC_SUBST([DUMPBIN])
if test : != "$DUMPBIN"; then
NM=$DUMPBIN
fi
fi
test -z "$NM" && NM=nm
AC_SUBST([NM])
_LT_DECL([], [NM], [1], [A BSD- or MS-compatible name lister])dnl
AC_CACHE_CHECK([the name lister ($NM) interface], [lt_cv_nm_interface],
[lt_cv_nm_interface="BSD nm"
echo "int some_variable = 0;" > conftest.$ac_ext
(eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&AS_MESSAGE_LOG_FD)
(eval "$ac_compile" 2>conftest.err)
cat conftest.err >&AS_MESSAGE_LOG_FD
(eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&AS_MESSAGE_LOG_FD)
(eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out)
cat conftest.err >&AS_MESSAGE_LOG_FD
(eval echo "\"\$as_me:$LINENO: output\"" >&AS_MESSAGE_LOG_FD)
cat conftest.out >&AS_MESSAGE_LOG_FD
if $GREP 'External.*some_variable' conftest.out > /dev/null; then
lt_cv_nm_interface="MS dumpbin"
fi
rm -f conftest*])
])# LT_PATH_NM
# Old names:
AU_ALIAS([AM_PROG_NM], [LT_PATH_NM])
AU_ALIAS([AC_PROG_NM], [LT_PATH_NM])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AM_PROG_NM], [])
dnl AC_DEFUN([AC_PROG_NM], [])
# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
# --------------------------------
# how to determine the name of the shared library
# associated with a specific link library.
# -- PORTME fill in with the dynamic library characteristics
m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB],
[m4_require([_LT_DECL_EGREP])
m4_require([_LT_DECL_OBJDUMP])
m4_require([_LT_DECL_DLLTOOL])
AC_CACHE_CHECK([how to associate runtime and link libraries],
lt_cv_sharedlib_from_linklib_cmd,
[lt_cv_sharedlib_from_linklib_cmd='unknown'
case $host_os in
cygwin* | mingw* | pw32* | cegcc*)
# two different shell functions defined in ltmain.sh;
# decide which one to use based on capabilities of $DLLTOOL
case `$DLLTOOL --help 2>&1` in
*--identify-strict*)
lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
;;
*)
lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
;;
esac
;;
*)
# fallback: assume linklib IS sharedlib
lt_cv_sharedlib_from_linklib_cmd=$ECHO
;;
esac
])
sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
_LT_DECL([], [sharedlib_from_linklib_cmd], [1],
[Command to associate shared and link libraries])
])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
# _LT_PATH_MANIFEST_TOOL
# ----------------------
# locate the manifest tool
m4_defun([_LT_PATH_MANIFEST_TOOL],
[AC_CHECK_TOOL(MANIFEST_TOOL, mt, :)
test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool],
[lt_cv_path_mainfest_tool=no
echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD
$MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
cat conftest.err >&AS_MESSAGE_LOG_FD
if $GREP 'Manifest Tool' conftest.out > /dev/null; then
lt_cv_path_mainfest_tool=yes
fi
rm -f conftest*])
if test yes != "$lt_cv_path_mainfest_tool"; then
MANIFEST_TOOL=:
fi
_LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl
])# _LT_PATH_MANIFEST_TOOL
# _LT_DLL_DEF_P([FILE])
# ---------------------
# True iff FILE is a Windows DLL '.def' file.
# Keep in sync with func_dll_def_p in the libtool script
AC_DEFUN([_LT_DLL_DEF_P],
[dnl
test DEF = "`$SED -n dnl
-e '\''s/^[[ ]]*//'\'' dnl Strip leading whitespace
-e '\''/^\(;.*\)*$/d'\'' dnl Delete empty lines and comments
-e '\''s/^\(EXPORTS\|LIBRARY\)\([[ ]].*\)*$/DEF/p'\'' dnl
-e q dnl Only consider the first "real" line
$1`" dnl
])# _LT_DLL_DEF_P
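# For illustration, _LT_DLL_DEF_P accepts a file whose first non-blank,
# non-comment line begins with EXPORTS or LIBRARY, e.g. a .def file such as
#   ; comment lines and blank lines are skipped
#   LIBRARY "example"
#   EXPORTS
#     some_symbol
# (the names above are placeholders, not taken from this package).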
# LT_LIB_M
# --------
# check for math library
AC_DEFUN([LT_LIB_M],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
LIBM=
case $host in
*-*-beos* | *-*-cegcc* | *-*-cygwin* | *-*-haiku* | *-*-pw32* | *-*-darwin*)
# These systems don't have libm, or don't need it
;;
*-ncr-sysv4.3*)
AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM=-lmw)
AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm")
;;
*)
AC_CHECK_LIB(m, cos, LIBM=-lm)
;;
esac
AC_SUBST([LIBM])
])# LT_LIB_M
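# Illustrative usage (hypothetical, not from this package's build files):
# configure.ac invokes LT_LIB_M and a Makefile.am then links with the result:
#   LT_LIB_M
#   ...
#   foo_la_LIBADD = $(LIBM)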
# Old name:
AU_ALIAS([AC_CHECK_LIBM], [LT_LIB_M])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_CHECK_LIBM], [])
# _LT_COMPILER_NO_RTTI([TAGNAME])
# -------------------------------
m4_defun([_LT_COMPILER_NO_RTTI],
[m4_require([_LT_TAG_COMPILER])dnl
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=
if test yes = "$GCC"; then
case $cc_basename in
nvcc*)
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -Xcompiler -fno-builtin' ;;
*)
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' ;;
esac
_LT_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions],
lt_cv_prog_compiler_rtti_exceptions,
[-fno-rtti -fno-exceptions], [],
[_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"])
fi
_LT_TAGDECL([no_builtin_flag], [lt_prog_compiler_no_builtin_flag], [1],
[Compiler flag to turn off builtin functions])
])# _LT_COMPILER_NO_RTTI
# _LT_CMD_GLOBAL_SYMBOLS
# ----------------------
m4_defun([_LT_CMD_GLOBAL_SYMBOLS],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([AC_PROG_AWK])dnl
AC_REQUIRE([LT_PATH_NM])dnl
AC_REQUIRE([LT_PATH_LD])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_TAG_COMPILER])dnl
# Check for command to grab the raw symbol name followed by C symbol from nm.
AC_MSG_CHECKING([command to parse $NM output from $compiler object])
AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe],
[
# These are sane defaults that work on at least a few old systems.
# [They come from Ultrix. What could be older than Ultrix?!! ;)]
# Character class describing NM global symbol codes.
symcode='[[BCDEGRST]]'
# Regexp to match symbols that can be accessed directly from C.
sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)'
# Define system-specific variables.
case $host_os in
aix*)
symcode='[[BCDT]]'
;;
cygwin* | mingw* | pw32* | cegcc*)
symcode='[[ABCDGISTW]]'
;;
hpux*)
if test ia64 = "$host_cpu"; then
symcode='[[ABCDEGRST]]'
fi
;;
irix* | nonstopux*)
symcode='[[BCDEGRST]]'
;;
osf*)
symcode='[[BCDEGQRST]]'
;;
solaris*)
symcode='[[BDRT]]'
;;
sco3.2v5*)
symcode='[[DT]]'
;;
sysv4.2uw2*)
symcode='[[DT]]'
;;
sysv5* | sco5v6* | unixware* | OpenUNIX*)
symcode='[[ABDT]]'
;;
sysv4)
symcode='[[DFNSTU]]'
;;
esac
# If we're using GNU nm, then use its standard symbol codes.
case `$NM -V 2>&1` in
*GNU* | *'with BFD'*)
symcode='[[ABCDGIRSTW]]' ;;
esac
if test "$lt_cv_nm_interface" = "MS dumpbin"; then
# Gets list of data symbols to import.
lt_cv_sys_global_symbol_to_import="sed -n -e 's/^I .* \(.*\)$/\1/p'"
# Adjust the below global symbol transforms to fixup imported variables.
lt_cdecl_hook=" -e 's/^I .* \(.*\)$/extern __declspec(dllimport) char \1;/p'"
lt_c_name_hook=" -e 's/^I .* \(.*\)$/ {\"\1\", (void *) 0},/p'"
lt_c_name_lib_hook="\
-e 's/^I .* \(lib.*\)$/ {\"\1\", (void *) 0},/p'\
-e 's/^I .* \(.*\)$/ {\"lib\1\", (void *) 0},/p'"
else
# Disable hooks by default.
lt_cv_sys_global_symbol_to_import=
lt_cdecl_hook=
lt_c_name_hook=
lt_c_name_lib_hook=
fi
# Transform an extracted symbol line into a proper C declaration.
# Some systems (esp. on ia64) link data and code symbols differently,
# so use this general approach.
lt_cv_sys_global_symbol_to_cdecl="sed -n"\
$lt_cdecl_hook\
" -e 's/^T .* \(.*\)$/extern int \1();/p'"\
" -e 's/^$symcode$symcode* .* \(.*\)$/extern char \1;/p'"
# Transform an extracted symbol line into symbol name and symbol address
lt_cv_sys_global_symbol_to_c_name_address="sed -n"\
$lt_c_name_hook\
" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\
" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/p'"
# Transform an extracted symbol line into symbol name with lib prefix and
# symbol address.
lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n"\
$lt_c_name_lib_hook\
" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\
" -e 's/^$symcode$symcode* .* \(lib.*\)$/ {\"\1\", (void *) \&\1},/p'"\
" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"lib\1\", (void *) \&\1},/p'"
# Handle CRLF in mingw tool chain
opt_cr=
case $build_os in
mingw*)
opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp
;;
esac
# Try without a prefix underscore, then with it.
for ac_symprfx in "" "_"; do
# Transform symcode, sympat, and symprfx into a raw symbol and a C symbol.
symxfrm="\\1 $ac_symprfx\\2 \\2"
# Write the raw and C identifiers.
if test "$lt_cv_nm_interface" = "MS dumpbin"; then
# Fake it for dumpbin and say T for any non-static function,
# D for any global variable and I for any imported variable.
# Also find C++ and __fastcall symbols from MSVC++,
# which start with @ or ?.
lt_cv_sys_global_symbol_pipe="$AWK ['"\
" {last_section=section; section=\$ 3};"\
" /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\
" /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\
" /^ *Symbol name *: /{split(\$ 0,sn,\":\"); si=substr(sn[2],2)};"\
" /^ *Type *: code/{print \"T\",si,substr(si,length(prfx))};"\
" /^ *Type *: data/{print \"I\",si,substr(si,length(prfx))};"\
" \$ 0!~/External *\|/{next};"\
" / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\
" {if(hide[section]) next};"\
" {f=\"D\"}; \$ 0~/\(\).*\|/{f=\"T\"};"\
" {split(\$ 0,a,/\||\r/); split(a[2],s)};"\
" s[1]~/^[@?]/{print f,s[1],s[1]; next};"\
" s[1]~prfx {split(s[1],t,\"@\"); print f,t[1],substr(t[1],length(prfx))}"\
" ' prfx=^$ac_symprfx]"
else
lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
fi
lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
# Check to see that the pipe works correctly.
pipe_works=no
rm -f conftest*
cat > conftest.$ac_ext <<_LT_EOF
#ifdef __cplusplus
extern "C" {
#endif
char nm_test_var;
void nm_test_func(void);
void nm_test_func(void){}
#ifdef __cplusplus
}
#endif
int main(){nm_test_var='a';nm_test_func();return(0);}
_LT_EOF
if AC_TRY_EVAL(ac_compile); then
# Now try to grab the symbols.
nlist=conftest.nm
if AC_TRY_EVAL(NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) && test -s "$nlist"; then
# Try sorting and uniquifying the output.
if sort "$nlist" | uniq > "$nlist"T; then
mv -f "$nlist"T "$nlist"
else
rm -f "$nlist"T
fi
# Make sure that we snagged all the symbols we need.
if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
cat <<_LT_EOF > conftest.$ac_ext
/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */
#if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE
/* DATA imports from DLLs on WIN32 can't be const, because runtime
relocations are performed -- see ld's documentation on pseudo-relocs. */
# define LT@&t@_DLSYM_CONST
#elif defined __osf__
/* This system does not cope well with relocations in const data. */
# define LT@&t@_DLSYM_CONST
#else
# define LT@&t@_DLSYM_CONST const
#endif
#ifdef __cplusplus
extern "C" {
#endif
_LT_EOF
# Now generate the symbol file.
eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext'
cat <<_LT_EOF >> conftest.$ac_ext
/* The mapping between symbol names and symbols. */
LT@&t@_DLSYM_CONST struct {
const char *name;
void *address;
}
lt__PROGRAM__LTX_preloaded_symbols[[]] =
{
{ "@PROGRAM@", (void *) 0 },
_LT_EOF
$SED "s/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext
cat <<\_LT_EOF >> conftest.$ac_ext
{0, (void *) 0}
};
/* This works around a problem in FreeBSD linker */
#ifdef FREEBSD_WORKAROUND
static const void *lt_preloaded_setup() {
return lt__PROGRAM__LTX_preloaded_symbols;
}
#endif
#ifdef __cplusplus
}
#endif
_LT_EOF
# Now try linking the two files.
mv conftest.$ac_objext conftstm.$ac_objext
lt_globsym_save_LIBS=$LIBS
lt_globsym_save_CFLAGS=$CFLAGS
LIBS=conftstm.$ac_objext
CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)"
if AC_TRY_EVAL(ac_link) && test -s conftest$ac_exeext; then
pipe_works=yes
fi
LIBS=$lt_globsym_save_LIBS
CFLAGS=$lt_globsym_save_CFLAGS
else
echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD
fi
else
echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD
fi
else
echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD
fi
else
echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD
cat conftest.$ac_ext >&5
fi
rm -rf conftest* conftst*
# Do not use the global_symbol_pipe unless it works.
if test yes = "$pipe_works"; then
break
else
lt_cv_sys_global_symbol_pipe=
fi
done
])
if test -z "$lt_cv_sys_global_symbol_pipe"; then
lt_cv_sys_global_symbol_to_cdecl=
fi
if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then
AC_MSG_RESULT(failed)
else
AC_MSG_RESULT(ok)
fi
# Response file support.
if test "$lt_cv_nm_interface" = "MS dumpbin"; then
nm_file_list_spec='@'
elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then
nm_file_list_spec='@'
fi
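# For illustration: nm_file_list_spec='@' records that the name lister accepts
# a response file, so a long list of objects can be passed as something like
#   $NM @objects.lst
# (objects.lst is a placeholder) instead of naming every object on the
# command line.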
_LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1],
[Take the output of nm and produce a listing of raw symbols and C names])
_LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1],
[Transform the output of nm in a proper C declaration])
_LT_DECL([global_symbol_to_import], [lt_cv_sys_global_symbol_to_import], [1],
[Transform the output of nm into a list of symbols to manually relocate])
_LT_DECL([global_symbol_to_c_name_address],
[lt_cv_sys_global_symbol_to_c_name_address], [1],
[Transform the output of nm in a C name address pair])
_LT_DECL([global_symbol_to_c_name_address_lib_prefix],
[lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1],
[Transform the output of nm in a C name address pair when lib prefix is needed])
_LT_DECL([nm_interface], [lt_cv_nm_interface], [1],
[The name lister interface])
_LT_DECL([], [nm_file_list_spec], [1],
[Specify filename containing input files for $NM])
]) # _LT_CMD_GLOBAL_SYMBOLS
# _LT_COMPILER_PIC([TAGNAME])
# ---------------------------
m4_defun([_LT_COMPILER_PIC],
[m4_require([_LT_TAG_COMPILER])dnl
_LT_TAGVAR(lt_prog_compiler_wl, $1)=
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_static, $1)=
m4_if([$1], [CXX], [
# C++ specific cases for pic, static, wl, etc.
if test yes = "$GXX"; then
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
case $host_os in
aix*)
# All AIX code is PIC.
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
fi
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
m68k)
# FIXME: we need at least 68020 code to build shared libraries, but
# adding the '-m68020' flag to GCC prevents building anything better,
# like '-m68040'.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4'
;;
esac
;;
beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
# PIC is the default for these OSes.
;;
mingw* | cygwin* | os2* | pw32* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
# Although the cygwin gcc ignores -fPIC, still need this for old-style
# (--disable-auto-import) libraries
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
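# Illustrative sketch (typical usage assumed, not taken from this file): a
# library header can key off this define to choose the symbol decoration, e.g.
#   #if defined DLL_EXPORT
#   # define FOO_API __declspec(dllexport)
#   #else
#   # define FOO_API
#   #endif
# where FOO_API is a made-up name.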
case $host_os in
os2*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
;;
esac
;;
darwin* | rhapsody*)
# PIC is the default on this platform
# Common symbols not allowed in MH_DYLIB files
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
;;
*djgpp*)
# DJGPP does not support shared libraries at all
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
;;
haiku*)
# PIC is the default for Haiku.
# The "-static" flag exists, but is broken.
_LT_TAGVAR(lt_prog_compiler_static, $1)=
;;
interix[[3-9]]*)
# Interix 3.x gcc -fpic/-fPIC options generate broken code.
# Instead, we relocate shared libraries at runtime.
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic
fi
;;
hpux*)
# PIC is the default for 64-bit PA HP-UX, but not for 32-bit
# PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag
# sets the default TLS model and affects inlining.
case $host_cpu in
hppa*64*)
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
;;
*qnx* | *nto*)
# QNX uses GNU C++, but we need to define the -shared option too, otherwise
# it will dump core.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
else
case $host_os in
aix[[4-9]]*)
# All AIX code is PIC.
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
else
_LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp'
fi
;;
chorus*)
case $cc_basename in
cxch68*)
# Green Hills C++ Compiler
# _LT_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a"
;;
esac
;;
mingw* | cygwin* | os2* | pw32* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
;;
dgux*)
case $cc_basename in
ec++*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
;;
ghcx*)
# Green Hills C++ Compiler
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
;;
*)
;;
esac
;;
freebsd* | dragonfly*)
# FreeBSD uses GNU C++
;;
hpux9* | hpux10* | hpux11*)
case $cc_basename in
CC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive'
if test ia64 != "$host_cpu"; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
fi
;;
aCC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive'
case $host_cpu in
hppa*64*|ia64*)
# +Z the default
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
;;
esac
;;
*)
;;
esac
;;
interix*)
# This is c89, which is MS Visual C++ (no shared libs)
# Anyone wants to do a port?
;;
irix5* | irix6* | nonstopux*)
case $cc_basename in
CC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
# CC pic flag -KPIC is the default.
;;
*)
;;
esac
;;
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
case $cc_basename in
KCC*)
# KAI C++ Compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
ecpc* )
# old Intel C++ for x86_64, which still supported -KPIC.
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
icpc* )
# Intel C++, used to be incompatible with GCC.
# ICC 10 doesn't accept -KPIC any more.
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
pgCC* | pgcpp*)
# Portland Group C++ compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
cxx*)
# Compaq C++
# Make sure the PIC flag is empty. It appears that all Alpha
# Linux and Compaq Tru64 Unix objects are PIC.
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
xlc* | xlC* | bgxl[[cC]]* | mpixl[[cC]]*)
# IBM XL 8.0, 9.0 on PPC and BlueGene
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink'
;;
*)
case `$CC -V 2>&1 | sed 5q` in
*Sun\ C*)
# Sun C++ 5.9
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
;;
esac
;;
esac
;;
lynxos*)
;;
m88k*)
;;
mvs*)
case $cc_basename in
cxx*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall'
;;
*)
;;
esac
;;
netbsd* | netbsdelf*-gnu)
;;
*qnx* | *nto*)
# QNX uses GNU C++, but we need to define the -shared option too, otherwise
# it will dump core.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
osf3* | osf4* | osf5*)
case $cc_basename in
KCC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,'
;;
RCC*)
# Rational C++ 2.4.1
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
;;
cxx*)
# Digital/Compaq C++
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# Make sure the PIC flag is empty. It appears that all Alpha
# Linux and Compaq Tru64 Unix objects are PIC.
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
*)
;;
esac
;;
psos*)
;;
solaris*)
case $cc_basename in
CC* | sunCC*)
# Sun C++ 4.2, 5.x and Centerline C++
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
;;
gcx*)
# Green Hills C++ Compiler
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
;;
*)
;;
esac
;;
sunos4*)
case $cc_basename in
CC*)
# Sun C++ 4.x
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
lcc*)
# Lucid
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
;;
*)
;;
esac
;;
sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*)
case $cc_basename in
CC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
esac
;;
tandem*)
case $cc_basename in
NCC*)
# NonStop-UX NCC 3.20
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
;;
*)
;;
esac
;;
vxworks*)
;;
*)
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
;;
esac
fi
],
[
if test yes = "$GCC"; then
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
case $host_os in
aix*)
# All AIX code is PIC.
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
fi
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
m68k)
# FIXME: we need at least 68020 code to build shared libraries, but
# adding the '-m68020' flag to GCC prevents building anything better,
# like '-m68040'.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4'
;;
esac
;;
beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
# PIC is the default for these OSes.
;;
mingw* | cygwin* | pw32* | os2* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
# Although the cygwin gcc ignores -fPIC, still need this for old-style
# (--disable-auto-import) libraries
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
case $host_os in
os2*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
;;
esac
;;
darwin* | rhapsody*)
# PIC is the default on this platform
# Common symbols not allowed in MH_DYLIB files
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
;;
haiku*)
# PIC is the default for Haiku.
# The "-static" flag exists, but is broken.
_LT_TAGVAR(lt_prog_compiler_static, $1)=
;;
hpux*)
# PIC is the default for 64-bit PA HP-UX, but not for 32-bit
# PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag
# sets the default TLS model and affects inlining.
case $host_cpu in
hppa*64*)
# +Z the default
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
;;
interix[[3-9]]*)
# Interix 3.x gcc -fpic/-fPIC options generate broken code.
# Instead, we relocate shared libraries at runtime.
;;
msdosdjgpp*)
# Just because we use GCC doesn't mean we suddenly get shared libraries
# on systems that don't support them.
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
enable_shared=no
;;
*nto* | *qnx*)
# QNX uses GNU C++, but we need to define the -shared option too, otherwise
# it will dump core.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic
fi
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
case $cc_basename in
nvcc*) # Cuda Compiler Driver 2.2
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Xlinker '
if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)="-Xcompiler $_LT_TAGVAR(lt_prog_compiler_pic, $1)"
fi
;;
esac
else
# PORTME Check for flag to pass linker flags through the system compiler.
case $host_os in
aix*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
else
_LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp'
fi
;;
darwin* | rhapsody*)
# PIC is the default on this platform
# Common symbols not allowed in MH_DYLIB files
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
case $cc_basename in
nagfor*)
# NAG Fortran compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
esac
;;
mingw* | cygwin* | pw32* | os2* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
case $host_os in
os2*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
;;
esac
;;
hpux9* | hpux10* | hpux11*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
# not for PA HP-UX.
case $host_cpu in
hppa*64*|ia64*)
# +Z the default
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
;;
esac
# Is there a better lt_prog_compiler_static that works with the bundled CC?
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive'
;;
irix5* | irix6* | nonstopux*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# PIC (with -KPIC) is the default.
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
case $cc_basename in
# old Intel for x86_64, which still supported -KPIC.
ecc*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
# icc used to be incompatible with GCC.
# ICC 10 doesn't accept -KPIC any more.
icc* | ifort*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
# Lahey Fortran 8.1.
lf95*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared'
_LT_TAGVAR(lt_prog_compiler_static, $1)='--static'
;;
nagfor*)
# NAG Fortran compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
tcc*)
# Fabrice Bellard et al's Tiny C Compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
# Portland Group compilers (*not* the Pentium gcc compiler,
# which looks to be a dead project)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
ccc*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# All Alpha code is PIC.
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
xl* | bgxl* | bgf* | mpixl*)
# IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink'
;;
*)
case `$CC -V 2>&1 | sed 5q` in
*Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [[1-7]].* | *Sun*Fortran*\ 8.[[0-3]]*)
# Sun Fortran 8.3 passes all unrecognized flags to the linker
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)=''
;;
*Sun\ F* | *Sun*Fortran*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
;;
*Sun\ C*)
# Sun C 5.9
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
;;
*Intel*\ [[CF]]*Compiler*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
*Portland\ Group*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
esac
;;
esac
;;
newsos6)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
*nto* | *qnx*)
# QNX uses GNU C++, but we need to define the -shared option too, otherwise
# it will dump core.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
osf3* | osf4* | osf5*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# All OSF/1 code is PIC.
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
rdos*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
solaris*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
case $cc_basename in
f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';;
*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';;
esac
;;
sunos4*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
sysv4 | sysv4.2uw2* | sysv4.3*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
fi
;;
sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
unicos*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
;;
uts4*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
*)
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
;;
esac
fi
])
case $host_os in
# For platforms that do not support PIC, -DPIC is meaningless:
*djgpp*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])"
;;
esac
AC_CACHE_CHECK([for $compiler option to produce PIC],
[_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
[_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)
#
# Check to make sure the PIC flag actually works.
#
if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then
_LT_COMPILER_OPTION([if $compiler PIC flag $_LT_TAGVAR(lt_prog_compiler_pic, $1) works],
[_LT_TAGVAR(lt_cv_prog_compiler_pic_works, $1)],
[$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])], [],
[case $_LT_TAGVAR(lt_prog_compiler_pic, $1) in
"" | " "*) ;;
*) _LT_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_TAGVAR(lt_prog_compiler_pic, $1)" ;;
esac],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no])
fi
_LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
[Additional compiler flags for building library objects])
_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
[How to pass a linker flag through the compiler])
#
# Check to make sure the static flag actually works.
#
wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) eval lt_tmp_static_flag=\"$_LT_TAGVAR(lt_prog_compiler_static, $1)\"
_LT_LINKER_OPTION([if $compiler static flag $lt_tmp_static_flag works],
_LT_TAGVAR(lt_cv_prog_compiler_static_works, $1),
$lt_tmp_static_flag,
[],
[_LT_TAGVAR(lt_prog_compiler_static, $1)=])
_LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
[Compiler flag to prevent dynamic linking])
])# _LT_COMPILER_PIC
# _LT_LINKER_SHLIBS([TAGNAME])
# ----------------------------
# See if the linker supports building shared libraries.
m4_defun([_LT_LINKER_SHLIBS],
[AC_REQUIRE([LT_PATH_LD])dnl
AC_REQUIRE([LT_PATH_NM])dnl
m4_require([_LT_PATH_MANIFEST_TOOL])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
m4_require([_LT_TAG_COMPILER])dnl
AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
m4_if([$1], [CXX], [
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
case $host_os in
aix[[4-9]]*)
# If we're using GNU nm, then we don't want the "-C" option.
# -C means demangle to GNU nm, but means don't demangle to AIX nm.
# Without the "-l" option, or with the "-B" option, AIX nm treats
# weak defined symbols like other global defined symbols, whereas
# GNU nm marks them as "W".
# While the 'weak' keyword is ignored in the Export File, we need
# it in the Import File for the 'aix-soname' feature, so we have
# to replace the "-B" option with "-P" for AIX nm.
if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
_LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols'
else
_LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols'
fi
;;
pw32*)
_LT_TAGVAR(export_symbols_cmds, $1)=$ltdll_cmds
;;
cygwin* | mingw* | cegcc*)
case $cc_basename in
cl*)
_LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
;;
*)
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
;;
esac
;;
linux* | k*bsd*-gnu | gnu*)
_LT_TAGVAR(link_all_deplibs, $1)=no
;;
*)
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
;;
esac
], [
runpath_var=
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_cmds, $1)=
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(compiler_needs_object, $1)=no
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(old_archive_from_new_cmds, $1)=
_LT_TAGVAR(old_archive_from_expsyms_cmds, $1)=
_LT_TAGVAR(thread_safe_flag_spec, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
# include_expsyms should be a list of space-separated symbols to be *always*
# included in the symbol list
_LT_TAGVAR(include_expsyms, $1)=
# exclude_expsyms can be an extended regexp of symbols to exclude
# it will be wrapped by ' (' and ')$', so one must not match beginning or
# end of line. Example: 'a|bc|.*d.*' will exclude the symbols 'a' and 'bc',
# as well as any symbol that contains 'd'.
_LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
# Although _GLOBAL_OFFSET_TABLE_ is a valid C symbol name, most a.out
# platforms (ab)use it in PIC code, but their linkers get confused if
# the symbol is explicitly referenced. Since portable code cannot
# rely on this symbol name, it's probably fine to never include it in
# preloaded symbol tables.
# Exclude shared library initialization/finalization symbols.
dnl Note also adjust exclude_expsyms for C++ above.
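# For illustration: with the default above, the export list ends up being
# filtered through an expression of the form
#   ' (_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*)$'
# (the wrapping in ' (' and ')$' described in the comment), so lines ending in
# " _GLOBAL_OFFSET_TABLE_" or " _GLOBAL__FI_something" are dropped.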
extract_expsyms_cmds=
case $host_os in
cygwin* | mingw* | pw32* | cegcc*)
# FIXME: the MSVC++ port hasn't been tested in a loooong time
# When not using gcc, we currently assume that we are using
# Microsoft Visual C++.
if test yes != "$GCC"; then
with_gnu_ld=no
fi
;;
interix*)
# we just hope/assume this is gcc and not c89 (= MSVC++)
with_gnu_ld=yes
;;
openbsd* | bitrig*)
with_gnu_ld=no
;;
linux* | k*bsd*-gnu | gnu*)
_LT_TAGVAR(link_all_deplibs, $1)=no
;;
esac
_LT_TAGVAR(ld_shlibs, $1)=yes
# On some targets, GNU ld is compatible enough with the native linker
# that we're better off using the native interface for both.
lt_use_gnu_ld_interface=no
if test yes = "$with_gnu_ld"; then
case $host_os in
aix*)
# The AIX port of GNU ld has always aspired to compatibility
# with the native linker. However, as the warning in the GNU ld
# block says, versions before 2.19.5* couldn't really create working
# shared libraries, regardless of the interface used.
case `$LD -v 2>&1` in
*\ \(GNU\ Binutils\)\ 2.19.5*) ;;
*\ \(GNU\ Binutils\)\ 2.[[2-9]]*) ;;
*\ \(GNU\ Binutils\)\ [[3-9]]*) ;;
*)
lt_use_gnu_ld_interface=yes
;;
esac
;;
*)
lt_use_gnu_ld_interface=yes
;;
esac
fi
if test yes = "$lt_use_gnu_ld_interface"; then
# If archive_cmds runs LD, not CC, wlarc should be empty
wlarc='$wl'
# Set some defaults for GNU ld with shared library support. These
# are reset later if shared libraries are not supported. Putting them
# here allows them to be overridden if necessary.
runpath_var=LD_RUN_PATH
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
# ancient GNU ld didn't support --whole-archive et al.
if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then
_LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
else
_LT_TAGVAR(whole_archive_flag_spec, $1)=
fi
supports_anon_versioning=no
case `$LD -v | $SED -e 's/([^)]\+)\s\+//' 2>&1` in
*GNU\ gold*) supports_anon_versioning=yes ;;
*\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11
*\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
*\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
*\ 2.11.*) ;; # other 2.11 versions
*) supports_anon_versioning=yes ;;
esac
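# For illustration: when supports_anon_versioning=yes, the archive_expsym_cmds
# set below writes a linker version script roughly of the form
#   { global:
#     sym1;
#     sym2;
#   local: *; };
# and links the library against it, so only the exported symbols remain
# global (sym1/sym2 are placeholders).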
# See if GNU ld supports shared libraries.
case $host_os in
aix[[3-9]]*)
# On AIX/PPC, the GNU linker is very broken
if test ia64 != "$host_cpu"; then
_LT_TAGVAR(ld_shlibs, $1)=no
cat <<_LT_EOF 1>&2
*** Warning: the GNU linker, at least up to release 2.19, is reported
*** to be unable to reliably create shared libraries on AIX.
*** Therefore, libtool is disabling shared libraries support. If you
*** really care for shared libraries, you may want to install binutils
*** 2.20 or above, or modify your PATH so that a non-GNU linker is found.
*** You will then need to restart the configuration process.
_LT_EOF
fi
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)=''
;;
m68k)
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
;;
esac
;;
beos*)
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
# Joseph Beckenbach says some releases of gcc
# support --undefined. This deserves some investigation. FIXME
_LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
cygwin* | mingw* | pw32* | cegcc*)
# _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
# as there is no search path for DLLs.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols'
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
# If the export-symbols file already is a .def file, use it as
# is; otherwise, prepend EXPORTS...
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp $export_symbols $output_objdir/$soname.def;
else
echo EXPORTS > $output_objdir/$soname.def;
cat $export_symbols >> $output_objdir/$soname.def;
fi~
$CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
haiku*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
os2*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
shrext_cmds=.dll
_LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
prefix_cmds="$SED"~
if test EXPORTS = "`$SED 1q $export_symbols`"; then
prefix_cmds="$prefix_cmds -e 1d";
fi~
prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(old_archive_from_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
interix[[3-9]]*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc.
# Instead, shared libraries are loaded at an image base (0x10000000 by
# default) and relocated if they conflict, which is a slow, very memory-
# consuming and fragmenting process. To avoid this, we pick a random,
# 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link
# time. Moving up from 0x10000000 also allows more sbrk(2) space.
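# For reference, the expression below computes
#   (${RANDOM-$$} % 4096 / 2) * 262144 + 1342177280
# i.e. it picks one of 2048 slots of 0x40000 (256 KiB) bytes starting at
# 0x50000000, which yields exactly the 0x50000000..0x6FFC0000 range
# mentioned above.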
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='sed "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
;;
gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu)
tmp_diet=no
if test linux-dietlibc = "$host_os"; then
case $cc_basename in
diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn)
esac
fi
if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \
&& test no = "$tmp_diet"
then
tmp_addflag=' $pic_flag'
tmp_sharedflag='-shared'
case $cc_basename,$host_cpu in
pgcc*) # Portland Group C compiler
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
tmp_addflag=' $pic_flag'
;;
pgf77* | pgf90* | pgf95* | pgfortran*)
# Portland Group f77 and f90 compilers
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
tmp_addflag=' $pic_flag -Mnomain' ;;
ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64
tmp_addflag=' -i_dynamic' ;;
efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64
tmp_addflag=' -i_dynamic -nofor_main' ;;
ifc* | ifort*) # Intel Fortran compiler
tmp_addflag=' -nofor_main' ;;
lf95*) # Lahey Fortran 8.1
_LT_TAGVAR(whole_archive_flag_spec, $1)=
tmp_sharedflag='--shared' ;;
nagfor*) # NAGFOR 5.3
tmp_sharedflag='-Wl,-shared' ;;
xl[[cC]]* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL C 8.0 on PPC (deal with xlf below)
tmp_sharedflag='-qmkshrobj'
tmp_addflag= ;;
nvcc*) # Cuda Compiler Driver 2.2
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
_LT_TAGVAR(compiler_needs_object, $1)=yes
;;
esac
case `$CC -V 2>&1 | sed 5q` in
*Sun\ C*) # Sun C 5.9
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
_LT_TAGVAR(compiler_needs_object, $1)=yes
tmp_sharedflag='-G' ;;
*Sun\ F*) # Sun Fortran 8.3
tmp_sharedflag='-G' ;;
esac
_LT_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
if test yes = "$supports_anon_versioning"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
echo "local: *; };" >> $output_objdir/$libname.ver~
$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib'
fi
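# For illustration, the generated $output_objdir/$libname.ver is a small
# GNU ld version script of the form
#   { global:
#   exported_symbol_one;
#   exported_symbol_two;
#   local: *; };
# which exports only the listed symbols and hides everything else.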
case $cc_basename in
tcc*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)='-rdynamic'
;;
xlf* | bgf* | bgxlf* | mpixlf*)
# IBM XL Fortran 10.1 on PPC cannot create shared libs itself
_LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
if test yes = "$supports_anon_versioning"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
echo "local: *; };" >> $output_objdir/$libname.ver~
$LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
fi
;;
esac
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
netbsd* | netbsdelf*-gnu)
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
wlarc=
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
fi
;;
solaris*)
if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then
_LT_TAGVAR(ld_shlibs, $1)=no
cat <<_LT_EOF 1>&2
*** Warning: The releases 2.8.* of the GNU linker cannot reliably
*** create shared libraries on Solaris systems. Therefore, libtool
*** is disabling shared library support. We urge you to upgrade GNU
*** binutils to release 2.9.1 or newer. Another option is to modify
*** your PATH or compiler configuration so that the native linker is
*** used, and then restart.
_LT_EOF
elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*)
case `$LD -v 2>&1` in
*\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.1[[0-5]].*)
_LT_TAGVAR(ld_shlibs, $1)=no
cat <<_LT_EOF 1>&2
*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot
*** reliably create shared libraries on SCO systems. Therefore, libtool
*** is disabling shared library support. We urge you to upgrade GNU
*** binutils to release 2.16.91.0.3 or newer. Another option is to modify
*** your PATH or compiler configuration so that the native linker is
*** used, and then restart.
_LT_EOF
;;
*)
# For security reasons, it is highly recommended that you always
# use absolute paths for naming shared libraries, and exclude the
# DT_RUNPATH tag from executables and libraries. But doing so
# requires that you compile everything twice, which is a pain.
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
sunos4*)
_LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
wlarc=
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
if test no = "$_LT_TAGVAR(ld_shlibs, $1)"; then
runpath_var=
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
fi
else
# PORTME fill in a description of your system's linker (not GNU ld)
case $host_os in
aix3*)
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=yes
_LT_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
# Note: this linker hardcodes the directories in LIBPATH if there
# are no directories specified by -L.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
if test yes = "$GCC" && test -z "$lt_prog_compiler_static"; then
# Neither direct hardcoding nor static linking is supported with a
# broken collect2.
_LT_TAGVAR(hardcode_direct, $1)=unsupported
fi
;;
aix[[4-9]]*)
if test ia64 = "$host_cpu"; then
# On IA64, the linker does run time linking by default, so we don't
# have to do anything special.
aix_use_runtimelinking=no
exp_sym_flag='-Bexport'
no_entry_flag=
else
# If we're using GNU nm, then we don't want the "-C" option.
# -C means demangle to GNU nm, but to AIX nm it means don't demangle.
# Without the "-l" option, or with the "-B" option, AIX nm treats
# weak defined symbols like other global defined symbols, whereas
# GNU nm marks them as "W".
# While the 'weak' keyword is ignored in the Export File, we need
# it in the Import File for the 'aix-soname' feature, so we have
# to replace the "-B" option with "-P" for AIX nm.
if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
_LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols'
else
_LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols'
fi
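# Note (explanatory): the field numbers differ between the two branches
# because BSD-format nm output (-B) is 'value type name' (hence $2/$3 in the
# GNU nm branch), while POSIX-format output (-P) is 'name type value ...'
# (hence $1/$2 in the AIX nm branch).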
aix_use_runtimelinking=no
# Test if we are trying to use run time linking or normal
# AIX style linking. If -brtl is somewhere in LDFLAGS, we
# have runtime linking enabled, and use it for executables.
# For shared libraries, we enable/disable runtime linking
# depending on the kind of the shared library created -
# when "with_aix_soname,aix_use_runtimelinking" is:
# "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables
# "aix,yes" lib.so shared, rtl:yes, for executables
# lib.a static archive
# "both,no" lib.so.V(shr.o) shared, rtl:yes
# lib.a(lib.so.V) shared, rtl:no, for executables
# "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a(lib.so.V) shared, rtl:no
# "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a static archive
case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*)
for ld_flag in $LDFLAGS; do
if (test x-brtl = "x$ld_flag" || test x-Wl,-brtl = "x$ld_flag"); then
aix_use_runtimelinking=yes
break
fi
done
if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then
# With aix-soname=svr4, we create the lib.so.V shared archives only,
# so we don't have lib.a shared libs to link our executables.
# We have to force runtime linking in this case.
aix_use_runtimelinking=yes
LDFLAGS="$LDFLAGS -Wl,-brtl"
fi
;;
esac
exp_sym_flag='-bexport'
no_entry_flag='-bnoentry'
fi
# When large executables or shared objects are built, AIX ld can
# have problems creating the table of contents. If linking a library
# or program results in "error TOC overflow" add -mminimal-toc to
# CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
# enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
_LT_TAGVAR(archive_cmds, $1)=''
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='$wl-f,'
case $with_aix_soname,$aix_use_runtimelinking in
aix,*) ;; # traditional, no import file
svr4,* | *,yes) # use import file
# The Import File defines what to hardcode.
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
;;
esac
if test yes = "$GCC"; then
case $host_os in aix4.[[012]]|aix4.[[012]].*)
# We only want to do this on AIX 4.2 and lower; the check
# below for broken collect2 doesn't work under 4.3+
collect2name=`$CC -print-prog-name=collect2`
if test -f "$collect2name" &&
strings "$collect2name" | $GREP resolve_lib_name >/dev/null
then
# We have reworked collect2
:
else
# We have old collect2
_LT_TAGVAR(hardcode_direct, $1)=unsupported
# It fails to find uninstalled libraries when the uninstalled
# path is not listed in the libpath. Setting hardcode_minus_L
# to unsupported forces relinking
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=
fi
;;
esac
shared_flag='-shared'
if test yes = "$aix_use_runtimelinking"; then
shared_flag="$shared_flag "'$wl-G'
fi
# Need to ensure runtime linking is disabled for the traditional
# shared library, or the linker may eventually find shared libraries
# /with/ Import File - we do not want to mix them.
shared_flag_aix='-shared'
shared_flag_svr4='-shared $wl-G'
else
# not using gcc
if test ia64 = "$host_cpu"; then
# VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
# chokes on -Wl,-G. The following line is correct:
shared_flag='-G'
else
if test yes = "$aix_use_runtimelinking"; then
shared_flag='$wl-G'
else
shared_flag='$wl-bM:SRE'
fi
shared_flag_aix='$wl-bM:SRE'
shared_flag_svr4='$wl-G'
fi
fi
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall'
# It seems that -bexpall does not export symbols beginning with
# underscore (_), so it is better to generate a list of symbols to export.
_LT_TAGVAR(always_export_symbols, $1)=yes
if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then
# Warning - without using the other runtime loading flags (-brtl),
# -berok will link without error, but may produce a broken library.
_LT_TAGVAR(allow_undefined_flag, $1)='-berok'
# Determine the default libpath from the value encoded in an
# empty executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag
else
if test ia64 = "$host_cpu"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib'
_LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
_LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols"
else
# Determine the default libpath from the value encoded in an
# empty executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
# Warning - without using the other run time loading flags,
# -berok will link without error, but may produce a broken library.
_LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok'
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok'
if test yes = "$with_gnu_ld"; then
# We only use this code for GNU lds that support --whole-archive.
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
else
# Exported symbols can be pulled into shared objects from archives
_LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
_LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d'
# -brtl affects multiple linker settings, -berok does not and is overridden later
compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`'
if test svr4 != "$with_aix_soname"; then
# This is similar to how AIX traditionally builds its shared libraries.
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname'
fi
if test aix != "$with_aix_soname"; then
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp'
else
# used by -dlpreopen to get the symbols
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir'
fi
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d'
fi
fi
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)=''
;;
m68k)
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
;;
esac
;;
bsdi[[45]]*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic
;;
cygwin* | mingw* | pw32* | cegcc*)
# When not using gcc, we currently assume that we are using
# Microsoft Visual C++.
# hardcode_libdir_flag_spec is actually meaningless, as there is
# no search path for DLLs.
case $cc_basename in
cl*)
# Native MSVC
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='@'
# Tell ltmain to make .lib files, not .a files.
libext=lib
# Tell ltmain to make .dll files, not .so files.
shrext_cmds=.dll
# FIXME: Setting linknames here is a bad hack.
_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames='
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp "$export_symbols" "$output_objdir/$soname.def";
echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp";
else
$SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp;
fi~
$CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
linknames='
# The linker will not automatically build a static lib if we build a DLL.
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
_LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
# Don't use ranlib
_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
lt_tool_outputfile="@TOOL_OUTPUT@"~
case $lt_outputfile in
*.exe|*.EXE) ;;
*)
lt_outputfile=$lt_outputfile.exe
lt_tool_outputfile=$lt_tool_outputfile.exe
;;
esac~
if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then
$MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
$RM "$lt_outputfile.manifest";
fi'
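# Explanatory note: the postlink commands above embed the side-by-side
# manifest into the produced .exe via $MANIFEST_TOOL (usually Microsoft's
# mt.exe) and then delete the standalone .manifest file; the ':' test skips
# this step when no usable manifest tool was detected.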
;;
*)
# Assume MSVC wrapper
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
# Tell ltmain to make .lib files, not .a files.
libext=lib
# Tell ltmain to make .dll files, not .so files.
shrext_cmds=.dll
# FIXME: Setting linknames here is a bad hack.
_LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
# The linker will automatically build a .lib file if we build a DLL.
_LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
# FIXME: Should let the user specify the lib program.
_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
esac
;;
darwin* | rhapsody*)
_LT_DARWIN_LINKER_FEATURES($1)
;;
dgux*)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
# FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
# support. Future versions do this automatically, but an explicit c++rt0.o
# does not break anything, and helps significantly (at the cost of a little
# extra space).
freebsd2.2*)
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
# Unfortunately, older versions of FreeBSD 2 do not have this feature.
freebsd2.*)
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
# FreeBSD 3 and greater use gcc -shared to build shared libraries.
freebsd* | dragonfly*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
hpux9*)
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
else
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(hardcode_direct, $1)=yes
# hardcode_minus_L: Not really in the search PATH,
# but as the default location of the library.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
;;
hpux10*)
if test yes,no = "$GCC,$with_gnu_ld"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
fi
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# hardcode_minus_L: Not really in the search PATH,
# but as the default location of the library.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
fi
;;
hpux11*)
if test yes,no = "$GCC,$with_gnu_ld"; then
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
else
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
;;
*)
m4_if($1, [], [
# Older versions of the 11.00 compiler do not understand -b yet
# (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does)
_LT_LINKER_OPTION([if $CC understands -b],
_LT_TAGVAR(lt_cv_prog_compiler__b, $1), [-b],
[_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'],
[_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'])],
[_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'])
;;
esac
fi
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
case $host_cpu in
hppa*64*|ia64*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# hardcode_minus_L: Not really in the search PATH,
# but as the default location of the library.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
;;
esac
fi
;;
irix5* | irix6* | nonstopux*)
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
# Try to use the -exported_symbol ld option, if it does not
# work, assume that -exports_file does not work either and
# implicitly export all symbols.
# This should be the same for all languages, so no per-tag cache variable.
AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol],
[lt_cv_irix_exported_symbol],
[save_LDFLAGS=$LDFLAGS
LDFLAGS="$LDFLAGS -shared $wl-exported_symbol ${wl}foo $wl-update_registry $wl/dev/null"
AC_LINK_IFELSE(
[AC_LANG_SOURCE(
[AC_LANG_CASE([C], [[int foo (void) { return 0; }]],
[C++], [[int foo (void) { return 0; }]],
[Fortran 77], [[
subroutine foo
end]],
[Fortran], [[
subroutine foo
end]])])],
[lt_cv_irix_exported_symbol=yes],
[lt_cv_irix_exported_symbol=no])
LDFLAGS=$save_LDFLAGS])
if test yes = "$lt_cv_irix_exported_symbol"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations $wl-exports_file $wl$export_symbols -o $lib'
fi
_LT_TAGVAR(link_all_deplibs, $1)=no
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -exports_file $export_symbols -o $lib'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)='no'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(inherit_rpath, $1)=yes
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
linux*)
case $cc_basename in
tcc*)
# Fabrice Bellard et al's Tiny C Compiler
_LT_TAGVAR(ld_shlibs, $1)=yes
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
;;
netbsd* | netbsdelf*-gnu)
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out
else
_LT_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
newsos6)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*nto* | *qnx*)
;;
openbsd* | bitrig*)
if test -f /usr/libexec/ld.so; then
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags $wl-retain-symbols-file,$export_symbols'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
fi
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
os2*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
shrext_cmds=.dll
_LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
prefix_cmds="$SED"~
if test EXPORTS = "`$SED 1q $export_symbols`"; then
prefix_cmds="$prefix_cmds -e 1d";
fi~
prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(old_archive_from_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
osf3*)
if test yes = "$GCC"; then
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
else
_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)='no'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
;;
osf4* | osf5*) # as osf3* with the addition of -msym flag
if test yes = "$GCC"; then
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $pic_flag $libobjs $deplibs $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
else
_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~
$CC -shared$allow_undefined_flag $wl-input $wl$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~$RM $lib.exp'
# Both the C and C++ compilers support -rpath directly
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)='no'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
;;
solaris*)
_LT_TAGVAR(no_undefined_flag, $1)=' -z defs'
if test yes = "$GCC"; then
wlarc='$wl'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl-z ${wl}text $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -shared $pic_flag $wl-z ${wl}text $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
else
case `$CC -V 2>&1` in
*"Compilers 5.0"*)
wlarc=''
_LT_TAGVAR(archive_cmds, $1)='$LD -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$LD -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp'
;;
*)
wlarc='$wl'
_LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
;;
esac
fi
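# In each branch above, $lib.exp is a Solaris linker mapfile of the same
# '{ global: ...; local: *; };' form used for GNU ld version scripts earlier
# in this macro; it is passed via -M and removed again after linking.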
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
case $host_os in
solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
*)
# The compiler driver will combine and reorder linker options,
# but understands '-z linker_flag'. GCC discards it without '$wl',
# but is careful enough not to reorder.
# Supported since Solaris 2.6 (maybe 2.5.1?)
if test yes = "$GCC"; then
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract'
else
_LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract'
fi
;;
esac
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
sunos4*)
if test sequent = "$host_vendor"; then
# Use $CC to link under sequent, because it throws in some extra .o
# files that make .init and .fini sections work.
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h $soname -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
sysv4)
case $host_vendor in
sni)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=yes # is this really true???
;;
siemens)
## LD is ld; it makes a PLAMLIB
## CC just makes a GrossModule.
_LT_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs'
_LT_TAGVAR(hardcode_direct, $1)=no
;;
motorola)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie
;;
esac
runpath_var='LD_RUN_PATH'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
sysv4.3*)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport'
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
runpath_var=LD_RUN_PATH
hardcode_runpath_var=yes
_LT_TAGVAR(ld_shlibs, $1)=yes
fi
;;
sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*)
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
runpath_var='LD_RUN_PATH'
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
fi
;;
sysv5* | sco3.2v5* | sco5v6*)
# Note: We CANNOT use -z defs as we might desire, because we do not
# link with -lc, and that would cause any symbols used from libc to
# always be unresolved, which means just about no library would
# ever link correctly. If we're not using GNU ld we use -z text
# though, which does catch some bad symbols but isn't as heavy-handed
# as -z defs.
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport'
runpath_var='LD_RUN_PATH'
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
fi
;;
uts4*)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
if test sni = "$host_vendor"; then
case $host in
sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Blargedynsym'
;;
esac
fi
fi
])
AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)])
test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no
_LT_TAGVAR(with_gnu_ld, $1)=$with_gnu_ld
_LT_DECL([], [libext], [0], [Old archive suffix (normally "a")])dnl
_LT_DECL([], [shrext_cmds], [1], [Shared library suffix (normally ".so")])dnl
_LT_DECL([], [extract_expsyms_cmds], [2],
[The commands to extract the exported symbol list from a shared archive])
#
# Do we need to explicitly link libc?
#
case "x$_LT_TAGVAR(archive_cmds_need_lc, $1)" in
x|xyes)
# Assume -lc should be added
_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
if test yes,yes = "$GCC,$enable_shared"; then
case $_LT_TAGVAR(archive_cmds, $1) in
*'~'*)
# FIXME: we may have to deal with multi-command sequences.
;;
'$CC '*)
# Test whether the compiler implicitly links with -lc since on some
# systems, -lgcc has to come before -lc. If gcc already passes -lc
# to ld, don't add -lc before -lgcc.
AC_CACHE_CHECK([whether -lc should be explicitly linked in],
[lt_cv_]_LT_TAGVAR(archive_cmds_need_lc, $1),
[$RM conftest*
echo "$lt_simple_compile_test_code" > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile) 2>conftest.err; then
soname=conftest
lib=conftest
libobjs=conftest.$ac_objext
deplibs=
wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1)
pic_flag=$_LT_TAGVAR(lt_prog_compiler_pic, $1)
compiler_flags=-v
linker_flags=-v
verstring=
output_objdir=.
libname=conftest
lt_save_allow_undefined_flag=$_LT_TAGVAR(allow_undefined_flag, $1)
_LT_TAGVAR(allow_undefined_flag, $1)=
if AC_TRY_EVAL(_LT_TAGVAR(archive_cmds, $1) 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1)
then
lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=no
else
lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
fi
_LT_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag
else
cat conftest.err 1>&5
fi
$RM conftest*
])
_LT_TAGVAR(archive_cmds_need_lc, $1)=$lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)
;;
esac
fi
;;
esac
_LT_TAGDECL([build_libtool_need_lc], [archive_cmds_need_lc], [0],
[Whether or not to add -lc for building shared libraries])
_LT_TAGDECL([allow_libtool_libs_with_static_runtimes],
[enable_shared_with_static_runtimes], [0],
[Whether or not to disallow shared libs when runtime libs are static])
_LT_TAGDECL([], [export_dynamic_flag_spec], [1],
[Compiler flag to allow reflexive dlopens])
_LT_TAGDECL([], [whole_archive_flag_spec], [1],
[Compiler flag to generate shared objects directly from archives])
_LT_TAGDECL([], [compiler_needs_object], [1],
[Whether the compiler copes with passing no objects directly])
_LT_TAGDECL([], [old_archive_from_new_cmds], [2],
[Create an old-style archive from a shared archive])
_LT_TAGDECL([], [old_archive_from_expsyms_cmds], [2],
[Create a temporary old-style archive to link instead of a shared archive])
_LT_TAGDECL([], [archive_cmds], [2], [Commands used to build a shared archive])
_LT_TAGDECL([], [archive_expsym_cmds], [2])
_LT_TAGDECL([], [module_cmds], [2],
[Commands used to build a loadable module if different from building
a shared archive.])
_LT_TAGDECL([], [module_expsym_cmds], [2])
_LT_TAGDECL([], [with_gnu_ld], [1],
[Whether we are building with GNU ld or not])
_LT_TAGDECL([], [allow_undefined_flag], [1],
[Flag that allows shared libraries with undefined symbols to be built])
_LT_TAGDECL([], [no_undefined_flag], [1],
[Flag that enforces no undefined symbols])
_LT_TAGDECL([], [hardcode_libdir_flag_spec], [1],
[Flag to hardcode $libdir into a binary during linking.
This must work even if $libdir does not exist])
_LT_TAGDECL([], [hardcode_libdir_separator], [1],
[Whether we need a single "-rpath" flag with a separated argument])
_LT_TAGDECL([], [hardcode_direct], [0],
[Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes
DIR into the resulting binary])
_LT_TAGDECL([], [hardcode_direct_absolute], [0],
[Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes
DIR into the resulting binary and the resulting library dependency is
"absolute", i.e impossible to change by setting $shlibpath_var if the
library is relocated])
_LT_TAGDECL([], [hardcode_minus_L], [0],
[Set to "yes" if using the -LDIR flag during linking hardcodes DIR
into the resulting binary])
_LT_TAGDECL([], [hardcode_shlibpath_var], [0],
[Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR
into the resulting binary])
_LT_TAGDECL([], [hardcode_automatic], [0],
[Set to "yes" if building a shared library automatically hardcodes DIR
into the library and all subsequent libraries and executables linked
against it])
_LT_TAGDECL([], [inherit_rpath], [0],
[Set to yes if linker adds runtime paths of dependent libraries
to runtime path list])
_LT_TAGDECL([], [link_all_deplibs], [0],
[Whether libtool must link a program against all its dependency libraries])
_LT_TAGDECL([], [always_export_symbols], [0],
[Set to "yes" if exported symbols are required])
_LT_TAGDECL([], [export_symbols_cmds], [2],
[The commands to list exported symbols])
_LT_TAGDECL([], [exclude_expsyms], [1],
[Symbols that should not be listed in the preloaded symbols])
_LT_TAGDECL([], [include_expsyms], [1],
[Symbols that must always be exported])
_LT_TAGDECL([], [prelink_cmds], [2],
[Commands necessary for linking programs (against libraries) with templates])
_LT_TAGDECL([], [postlink_cmds], [2],
[Commands necessary for finishing linking programs])
_LT_TAGDECL([], [file_list_spec], [1],
[Specify filename containing input files])
dnl FIXME: Not yet implemented
dnl _LT_TAGDECL([], [thread_safe_flag_spec], [1],
dnl [Compiler flag to generate thread safe objects])
])# _LT_LINKER_SHLIBS
# _LT_LANG_C_CONFIG([TAG])
# ------------------------
# Ensure that the configuration variables for a C compiler are suitably
# defined. These variables are subsequently used by _LT_CONFIG to write
# the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_C_CONFIG],
[m4_require([_LT_DECL_EGREP])dnl
lt_save_CC=$CC
AC_LANG_PUSH(C)
# Source file extension for C test sources.
ac_ext=c
# Object file extension for compiled C test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code="int some_variable = 0;"
# Code to be used in simple link tests
lt_simple_link_test_code='int main(){return(0);}'
_LT_TAG_COMPILER
# Save the default compiler, since it gets overwritten when the other
# tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP.
compiler_DEFAULT=$CC
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
if test -n "$compiler"; then
_LT_COMPILER_NO_RTTI($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
LT_SYS_DLOPEN_SELF
_LT_CMD_STRIPLIB
# Report what library types will actually be built
AC_MSG_CHECKING([if libtool supports shared libraries])
AC_MSG_RESULT([$can_build_shared])
AC_MSG_CHECKING([whether to build shared libraries])
test no = "$can_build_shared" && enable_shared=no
# On AIX, shared libraries and static libraries use the same namespace, and
# are all built from PIC.
case $host_os in
aix3*)
test yes = "$enable_shared" && enable_static=no
if test -n "$RANLIB"; then
archive_cmds="$archive_cmds~\$RANLIB \$lib"
postinstall_cmds='$RANLIB $lib'
fi
;;
aix[[4-9]]*)
if test ia64 != "$host_cpu"; then
case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in
yes,aix,yes) ;; # shared object as lib.so file only
yes,svr4,*) ;; # shared object as lib.so archive member only
yes,*) enable_static=no ;; # shared object in lib.a archive as well
esac
fi
;;
esac
AC_MSG_RESULT([$enable_shared])
AC_MSG_CHECKING([whether to build static libraries])
# Make sure either enable_shared or enable_static is yes.
test yes = "$enable_shared" || enable_static=yes
AC_MSG_RESULT([$enable_static])
_LT_CONFIG($1)
fi
AC_LANG_POP
CC=$lt_save_CC
])# _LT_LANG_C_CONFIG
# _LT_LANG_CXX_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for a C++ compiler are suitably
# defined. These variables are subsequently used by _LT_CONFIG to write
# the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_CXX_CONFIG],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_PATH_MANIFEST_TOOL])dnl
if test -n "$CXX" && ( test no != "$CXX" &&
( (test g++ = "$CXX" && `g++ -v >/dev/null 2>&1` ) ||
(test g++ != "$CXX"))); then
AC_PROG_CXXCPP
else
_lt_caught_CXX_error=yes
fi
AC_LANG_PUSH(C++)
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(compiler_needs_object, $1)=no
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
_LT_TAGVAR(no_undefined_flag, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
# Source file extension for C++ test sources.
ac_ext=cpp
# Object file extension for compiled C++ test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# No sense in running all these tests if we already determined that
# the CXX compiler isn't working. Some variables (like enable_shared)
# are currently assumed to apply to all compilers on this platform,
# and will be corrupted by setting them based on a non-working compiler.
if test yes != "$_lt_caught_CXX_error"; then
# Code to be used in simple compile tests
lt_simple_compile_test_code="int some_variable = 0;"
# Code to be used in simple link tests
lt_simple_link_test_code='int main(int, char *[[]]) { return(0); }'
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_LD=$LD
lt_save_GCC=$GCC
GCC=$GXX
lt_save_with_gnu_ld=$with_gnu_ld
lt_save_path_LD=$lt_cv_path_LD
if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then
lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx
else
$as_unset lt_cv_prog_gnu_ld
fi
if test -n "${lt_cv_path_LDCXX+set}"; then
lt_cv_path_LD=$lt_cv_path_LDCXX
else
$as_unset lt_cv_path_LD
fi
test -z "${LDCXX+set}" || LD=$LDCXX
CC=${CXX-"c++"}
CFLAGS=$CXXFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
if test -n "$compiler"; then
# We don't want -fno-exception when compiling C++ code, so set the
# no_builtin_flag separately
if test yes = "$GXX"; then
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin'
else
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=
fi
if test yes = "$GXX"; then
# Set up default GNU C++ configuration
LT_PATH_LD
# Check if GNU C++ uses GNU ld as the underlying linker, since the
# archiving commands below assume that GNU ld is being used.
if test yes = "$with_gnu_ld"; then
_LT_TAGVAR(archive_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
# If archive_cmds runs LD, not CC, wlarc should be empty
# XXX I think wlarc can be eliminated in ltcf-cxx, but I need to
# investigate it a little bit more. (MM)
wlarc='$wl'
# ancient GNU ld didn't support --whole-archive et al.
if eval "`$CC -print-prog-name=ld` --help 2>&1" |
$GREP 'no-whole-archive' > /dev/null; then
_LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
else
_LT_TAGVAR(whole_archive_flag_spec, $1)=
fi
else
with_gnu_ld=no
wlarc=
# A generic and very simple default shared library creation
# command for GNU C++ for the case where it uses the native
# linker, instead of GNU ld. If possible, this setting should be
# overridden to take advantage of the native linker features on
# the platform it is being used on.
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
fi
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
else
GXX=no
with_gnu_ld=no
wlarc=
fi
# PORTME: fill in a description of your system's C++ link characteristics
AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
_LT_TAGVAR(ld_shlibs, $1)=yes
case $host_os in
aix3*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
aix[[4-9]]*)
if test ia64 = "$host_cpu"; then
# On IA64, the linker does run time linking by default, so we don't
# have to do anything special.
aix_use_runtimelinking=no
exp_sym_flag='-Bexport'
no_entry_flag=
else
aix_use_runtimelinking=no
# Test if we are trying to use run time linking or normal
# AIX style linking. If -brtl is somewhere in LDFLAGS, we
# have runtime linking enabled, and use it for executables.
# For shared libraries, we enable/disable runtime linking
# depending on the kind of the shared library created -
# when "with_aix_soname,aix_use_runtimelinking" is:
# "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables
# "aix,yes" lib.so shared, rtl:yes, for executables
# lib.a static archive
# "both,no" lib.so.V(shr.o) shared, rtl:yes
# lib.a(lib.so.V) shared, rtl:no, for executables
# "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a(lib.so.V) shared, rtl:no
# "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a static archive
case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*)
for ld_flag in $LDFLAGS; do
case $ld_flag in
*-brtl*)
aix_use_runtimelinking=yes
break
;;
esac
done
if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then
# With aix-soname=svr4, we create the lib.so.V shared archives only,
# so we don't have lib.a shared libs to link our executables.
# We have to force runtime linking in this case.
aix_use_runtimelinking=yes
LDFLAGS="$LDFLAGS -Wl,-brtl"
fi
;;
esac
exp_sym_flag='-bexport'
no_entry_flag='-bnoentry'
fi
# When large executables or shared objects are built, AIX ld can
# have problems creating the table of contents. If linking a library
# or program results in "error TOC overflow" add -mminimal-toc to
# CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
# enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
_LT_TAGVAR(archive_cmds, $1)=''
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='$wl-f,'
case $with_aix_soname,$aix_use_runtimelinking in
aix,*) ;; # no import file
svr4,* | *,yes) # use import file
# The Import File defines what to hardcode.
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
;;
esac
if test yes = "$GXX"; then
case $host_os in aix4.[[012]]|aix4.[[012]].*)
# We only want to do this on AIX 4.2 and lower; the check
# below for broken collect2 doesn't work under 4.3+
collect2name=`$CC -print-prog-name=collect2`
if test -f "$collect2name" &&
strings "$collect2name" | $GREP resolve_lib_name >/dev/null
then
# We have reworked collect2
:
else
# We have old collect2
_LT_TAGVAR(hardcode_direct, $1)=unsupported
# It fails to find uninstalled libraries when the uninstalled
# path is not listed in the libpath. Setting hardcode_minus_L
# to unsupported forces relinking
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=
fi
esac
shared_flag='-shared'
if test yes = "$aix_use_runtimelinking"; then
shared_flag=$shared_flag' $wl-G'
fi
# Need to ensure runtime linking is disabled for the traditional
# shared library, or the linker may eventually find shared libraries
# /with/ Import File - we do not want to mix them.
shared_flag_aix='-shared'
shared_flag_svr4='-shared $wl-G'
else
# not using gcc
if test ia64 = "$host_cpu"; then
# VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
# chokes on -Wl,-G. The following line is correct:
shared_flag='-G'
else
if test yes = "$aix_use_runtimelinking"; then
shared_flag='$wl-G'
else
shared_flag='$wl-bM:SRE'
fi
shared_flag_aix='$wl-bM:SRE'
shared_flag_svr4='$wl-G'
fi
fi
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall'
# It seems that -bexpall does not export symbols beginning with
# underscore (_), so it is better to generate a list of symbols to
# export.
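# (Hence always_export_symbols=yes below: an explicit export list is
# generated instead of relying on -bexpall alone.)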
_LT_TAGVAR(always_export_symbols, $1)=yes
if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then
# Warning - without using the other runtime loading flags (-brtl),
# -berok will link without error, but may produce a broken library.
# The "-G" linker flag allows undefined symbols.
_LT_TAGVAR(no_undefined_flag, $1)='-bernotok'
# Determine the default libpath from the value encoded in an empty
# executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag
else
if test ia64 = "$host_cpu"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib'
_LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
_LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols"
else
# Determine the default libpath from the value encoded in an
# empty executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
# Warning - without using the other run time loading flags,
# -berok will link without error, but may produce a broken library.
_LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok'
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok'
if test yes = "$with_gnu_ld"; then
# We only use this code for GNU lds that support --whole-archive.
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
else
# Exported symbols can be pulled into shared objects from archives
_LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
_LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d'
# -brtl affects multiple linker settings, -berok does not and is overridden later
compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`'
if test svr4 != "$with_aix_soname"; then
# This is similar to how AIX traditionally builds its shared
# libraries. Need -bnortl late, we may have -brtl in LDFLAGS.
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname'
fi
if test aix != "$with_aix_soname"; then
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp'
else
# used by -dlpreopen to get the symbols
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir'
fi
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d'
fi
fi
;;
beos*)
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
# Joseph Beckenbach says some releases of gcc
# support --undefined. This deserves some investigation. FIXME
_LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
chorus*)
case $cc_basename in
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
cygwin* | mingw* | pw32* | cegcc*)
case $GXX,$cc_basename in
,cl* | no,cl*)
# Native MSVC
# hardcode_libdir_flag_spec is actually meaningless, as there is
# no search path for DLLs.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='@'
# Tell ltmain to make .lib files, not .a files.
libext=lib
# Tell ltmain to make .dll files, not .so files.
shrext_cmds=.dll
# FIXME: Setting linknames here is a bad hack.
_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames='
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp "$export_symbols" "$output_objdir/$soname.def";
echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp";
else
$SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp;
fi~
$CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
linknames='
# The linker will not automatically build a static lib if we build a DLL.
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
# Don't use ranlib
_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
lt_tool_outputfile="@TOOL_OUTPUT@"~
case $lt_outputfile in
*.exe|*.EXE) ;;
*)
lt_outputfile=$lt_outputfile.exe
lt_tool_outputfile=$lt_tool_outputfile.exe
;;
esac~
func_to_tool_file "$lt_outputfile"~
if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then
$MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
$RM "$lt_outputfile.manifest";
fi'
;;
*)
# g++
# _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
# as there is no search path for DLLs.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols'
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
# If the export-symbols file already is a .def file, use it as
# is; otherwise, prepend EXPORTS...
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp $export_symbols $output_objdir/$soname.def;
else
echo EXPORTS > $output_objdir/$soname.def;
cat $export_symbols >> $output_objdir/$soname.def;
fi~
$CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
darwin* | rhapsody*)
_LT_DARWIN_LINKER_FEATURES($1)
;;
os2*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
shrext_cmds=.dll
_LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
prefix_cmds="$SED"~
if test EXPORTS = "`$SED 1q $export_symbols`"; then
prefix_cmds="$prefix_cmds -e 1d";
fi~
prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(old_archive_from_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
dgux*)
case $cc_basename in
ec++*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
ghcx*)
# Green Hills C++ Compiler
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
freebsd2.*)
# C++ shared libraries reported to be fairly broken before
# switch to ELF
_LT_TAGVAR(ld_shlibs, $1)=no
;;
freebsd-elf*)
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
;;
freebsd* | dragonfly*)
# FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF
# conventions
_LT_TAGVAR(ld_shlibs, $1)=yes
;;
haiku*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
hpux9*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH,
# but as the default
# location of the library.
case $cc_basename in
CC*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
aCC*)
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -b $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
;;
*)
if test yes = "$GXX"; then
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
else
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
hpux10*|hpux11*)
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
case $host_cpu in
hppa*64*|ia64*)
;;
*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
;;
esac
fi
case $host_cpu in
hppa*64*|ia64*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH,
# but as the default
# location of the library.
;;
esac
case $cc_basename in
CC*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
aCC*)
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
esac
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
;;
*)
if test yes = "$GXX"; then
if test no = "$with_gnu_ld"; then
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
esac
fi
else
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
interix[[3-9]]*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc.
# Instead, shared libraries are loaded at an image base (0x10000000 by
# default) and relocated if they conflict, which is a slow, very memory-
# consuming and fragmenting process. To avoid this, we pick a random,
# 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link
# time. Moving up from 0x10000000 also allows more sbrk(2) space.
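# (Worked example of the expression used below: with RANDOM in 0..32767,
# RANDOM % 4096 / 2 gives k in 0..2047, and k * 262144 + 1342177280 is
# 0x50000000 + k*0x40000, i.e. a 256 KiB-aligned base between 0x50000000
# and 0x6FFC0000, matching the range stated above.)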
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='sed "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
;;
irix5* | irix6*)
case $cc_basename in
CC*)
# SGI C++
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
# Archives containing C++ object files must be created using
# "CC -ar", where "CC" is the IRIX C++ compiler. This is
# necessary to make sure instantiated templates are included
# in the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs'
;;
*)
if test yes = "$GXX"; then
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` -o $lib'
fi
fi
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
esac
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(inherit_rpath, $1)=yes
;;
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
case $cc_basename in
KCC*)
# Kuck and Associates, Inc. (KAI) C++ Compiler
# KCC will only create a shared library if the output file
# ends with ".so" (or ".sl" for HP-UX), so rename the library
# to its proper name (with version) after linking.
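# (Illustrative: for $lib = libfoo.so.1.2.3 the commands below first link
# libfoo.so and then mv it to libfoo.so.1.2.3; the tempext/templib step
# merely computes that intermediate ".so" name.)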
_LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib $wl-retain-symbols-file,$export_symbols; mv \$templib $lib'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
# Archives containing C++ object files must be created using
# "CC -Bstatic", where "CC" is the KAI C++ compiler.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs'
;;
icpc* | ecpc* )
# Intel C++
with_gnu_ld=yes
# version 8.0 and above of icpc choke on multiply defined symbols
# if we add $predep_objects and $postdep_objects, however 7.1 and
# earlier do not add the objects themselves.
case `$CC -V 2>&1` in
*"Version 7."*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
*) # Version 8.0 or newer
tmp_idyn=
case $host_cpu in
ia64*) tmp_idyn=' -i_dynamic';;
esac
_LT_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
esac
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
;;
pgCC* | pgcpp*)
# Portland Group C++ compiler
case `$CC -V` in
*pgCC\ [[1-5]].* | *pgcpp\ [[1-5]].*)
_LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
_LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
$RANLIB $oldlib'
_LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
*) # Version 6 and above use weak symbols
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
esac
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl--rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
;;
cxx*)
# Compaq C++
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib $wl-retain-symbols-file $wl$export_symbols'
runpath_var=LD_RUN_PATH
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed'
;;
xl* | mpixl* | bgxl*)
# IBM XL 8.0 on PPC, with GNU ld
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
_LT_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
if test yes = "$supports_anon_versioning"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
echo "local: *; };" >> $output_objdir/$libname.ver~
$CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib'
fi
;;
*)
case `$CC -V 2>&1 | sed 5q` in
*Sun\ C*)
# Sun C++ 5.9
_LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
_LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file $wl$export_symbols'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
_LT_TAGVAR(compiler_needs_object, $1)=yes
# Not sure whether something based on
# $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1
# would be better.
output_verbose_link_cmd='func_echo_all'
# Archives containing C++ object files must be created using
# "CC -xar", where "CC" is the Sun C++ compiler. This is
# necessary to make sure instantiated templates are included
# in the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs'
;;
esac
;;
esac
;;
lynxos*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
m88k*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
mvs*)
case $cc_basename in
cxx*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
netbsd*)
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags'
wlarc=
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
fi
# Workaround some broken pre-1.5 toolchains
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"'
;;
*nto* | *qnx*)
_LT_TAGVAR(ld_shlibs, $1)=yes
;;
openbsd* | bitrig*)
if test -f /usr/libexec/ld.so; then
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file,$export_symbols -o $lib'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
_LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
fi
output_verbose_link_cmd=func_echo_all
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
osf3* | osf4* | osf5*)
case $cc_basename in
KCC*)
# Kuck and Associates, Inc. (KAI) C++ Compiler
# KCC will only create a shared library if the output file
# ends with ".so" (or ".sl" for HP-UX), so rename the library
# to its proper name (with version) after linking.
_LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Archives containing C++ object files must be created using
# the KAI C++ compiler.
case $host in
osf3*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;;
*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs' ;;
esac
;;
RCC*)
# Rational C++ 2.4.1
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
cxx*)
case $host in
osf3*)
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $soname `test -n "$verstring" && func_echo_all "$wl-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
;;
*)
_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~
echo "-hidden">> $lib.exp~
$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname $wl-input $wl$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~
$RM $lib.exp'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
;;
esac
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
;;
*)
if test yes,no = "$GXX,$with_gnu_ld"; then
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
case $host in
osf3*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
;;
esac
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
else
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
psos*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
sunos4*)
case $cc_basename in
CC*)
# Sun C++ 4.x
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
lcc*)
# Lucid
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
solaris*)
case $cc_basename in
CC* | sunCC*)
# Sun C++ 4.2, 5.x and Centerline C++
_LT_TAGVAR(archive_cmds_need_lc,$1)=yes
_LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
_LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -G$allow_undefined_flag $wl-M $wl$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
case $host_os in
solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
*)
# The compiler driver will combine and reorder linker options,
# but understands '-z linker_flag'.
# Supported since Solaris 2.6 (maybe 2.5.1?)
_LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract'
;;
esac
_LT_TAGVAR(link_all_deplibs, $1)=yes
output_verbose_link_cmd='func_echo_all'
# Archives containing C++ object files must be created using
# "CC -xar", where "CC" is the Sun C++ compiler. This is
# necessary to make sure instantiated templates are included
# in the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs'
;;
gcx*)
# Green Hills C++ Compiler
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
# The C++ compiler must be used to create the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs'
;;
*)
# GNU C++ compiler with Solaris linker
if test yes,no = "$GXX,$with_gnu_ld"; then
_LT_TAGVAR(no_undefined_flag, $1)=' $wl-z ${wl}defs'
if $CC --version | $GREP -v '^2\.7' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -shared $pic_flag -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
else
# g++ 2.7 appears to require '-G' NOT '-shared' on this
# platform.
_LT_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -G -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $wl$libdir'
case $host_os in
solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
*)
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract'
;;
esac
fi
;;
esac
;;
sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*)
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
runpath_var='LD_RUN_PATH'
case $cc_basename in
CC*)
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
;;
sysv5* | sco3.2v5* | sco5v6*)
# Note: We CANNOT use -z defs as we might desire, because we do not
# link with -lc, and that would cause any symbols used from libc to
# always be unresolved, which means just about no library would
# ever link correctly. If we're not using GNU ld we use -z text
# though, which does catch some bad symbols but isn't as heavy-handed
# as -z defs.
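# (Put differently: -z defs would flag every libc reference as unresolved
# because -lc is deliberately omitted, whereas -z text only rejects text
# relocations, so it still catches genuinely broken objects.)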
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport'
runpath_var='LD_RUN_PATH'
case $cc_basename in
CC*)
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(old_archive_cmds, $1)='$CC -Tprelink_objects $oldobjs~
'"$_LT_TAGVAR(old_archive_cmds, $1)"
_LT_TAGVAR(reload_cmds, $1)='$CC -Tprelink_objects $reload_objs~
'"$_LT_TAGVAR(reload_cmds, $1)"
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
;;
tandem*)
case $cc_basename in
NCC*)
# NonStop-UX NCC 3.20
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
vxworks*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)])
test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no
_LT_TAGVAR(GCC, $1)=$GXX
_LT_TAGVAR(LD, $1)=$LD
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
_LT_SYS_HIDDEN_LIBDEPS($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi # test -n "$compiler"
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
LDCXX=$LD
LD=$lt_save_LD
GCC=$lt_save_GCC
with_gnu_ld=$lt_save_with_gnu_ld
lt_cv_path_LDCXX=$lt_cv_path_LD
lt_cv_path_LD=$lt_save_path_LD
lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld
lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld
fi # test yes != "$_lt_caught_CXX_error"
AC_LANG_POP
])# _LT_LANG_CXX_CONFIG
# _LT_FUNC_STRIPNAME_CNF
# ----------------------
# func_stripname_cnf prefix suffix name
# strip PREFIX and SUFFIX off of NAME.
# PREFIX and SUFFIX must not contain globbing or regex special
# characters, hashes, percent signs, but SUFFIX may contain a leading
# dot (in which case that matches only a dot).
#
# This function is identical to the (non-XSI) version of func_stripname,
# except this one can be used by m4 code that may be executed by configure,
# rather than the libtool script.
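# Illustrative example: func_stripname_cnf 'lib' '.la' 'libfoo.la' sets
# func_stripname_result to 'foo'; the leading dot in SUFFIX matches only
# a literal dot, as noted above.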
m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl
AC_REQUIRE([_LT_DECL_SED])
AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])
func_stripname_cnf ()
{
case @S|@2 in
.*) func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%\\\\@S|@2\$%%"`;;
*) func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%@S|@2\$%%"`;;
esac
} # func_stripname_cnf
])# _LT_FUNC_STRIPNAME_CNF
# _LT_SYS_HIDDEN_LIBDEPS([TAGNAME])
# ---------------------------------
# Figure out "hidden" library dependencies from verbose
# compiler output when linking a shared library.
# Parse the compiler output and extract the necessary
# objects, libraries and library flags.
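# (For illustration only, the values are discovered at configure time:
# with g++ on a typical ELF/Linux host this usually records postdeps such
# as "-lstdc++ -lm -lgcc_s -lc" plus crtbegin/crtend style objects as
# predep/postdep objects.)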
m4_defun([_LT_SYS_HIDDEN_LIBDEPS],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl
# Dependencies to place before and after the object being linked:
_LT_TAGVAR(predep_objects, $1)=
_LT_TAGVAR(postdep_objects, $1)=
_LT_TAGVAR(predeps, $1)=
_LT_TAGVAR(postdeps, $1)=
_LT_TAGVAR(compiler_lib_search_path, $1)=
dnl we can't use the lt_simple_compile_test_code here,
dnl because it contains code intended for an executable,
dnl not a library. It's possible we should let each
dnl tag define a new lt_????_link_test_code variable,
dnl but it's only used here...
m4_if([$1], [], [cat > conftest.$ac_ext <<_LT_EOF
int a;
void foo (void) { a = 0; }
_LT_EOF
], [$1], [CXX], [cat > conftest.$ac_ext <<_LT_EOF
class Foo
{
public:
Foo (void) { a = 0; }
private:
int a;
};
_LT_EOF
], [$1], [F77], [cat > conftest.$ac_ext <<_LT_EOF
subroutine foo
implicit none
integer*4 a
a=0
return
end
_LT_EOF
], [$1], [FC], [cat > conftest.$ac_ext <<_LT_EOF
subroutine foo
implicit none
integer a
a=0
return
end
_LT_EOF
], [$1], [GCJ], [cat > conftest.$ac_ext <<_LT_EOF
public class foo {
private int a;
public void bar (void) {
a = 0;
}
};
_LT_EOF
], [$1], [GO], [cat > conftest.$ac_ext <<_LT_EOF
package foo
func foo() {
}
_LT_EOF
])
_lt_libdeps_save_CFLAGS=$CFLAGS
case "$CC $CFLAGS " in #(
*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
*\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;;
esac
dnl Parse the compiler output and extract the necessary
dnl objects, libraries and library flags.
if AC_TRY_EVAL(ac_compile); then
# Parse the compiler output and extract the necessary
# objects, libraries and library flags.
# Sentinel used to keep track of whether or not we are before
# the conftest object file.
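# (Everything the compiler lists before conftest.$objext becomes a
# pre-dependency, everything after it a post-dependency.)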
pre_test_object_deps_done=no
for p in `eval "$output_verbose_link_cmd"`; do
case $prev$p in
-L* | -R* | -l*)
# Some compilers place space between "-{L,R}" and the path.
# Remove the space.
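# (For example, a bare "-L" followed by "/usr/lib" is remembered in $prev
# and handled as "-L/usr/lib" on the next loop iteration.)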
if test x-L = "$p" ||
test x-R = "$p"; then
prev=$p
continue
fi
# Expand the sysroot to ease extracting the directories later.
if test -z "$prev"; then
case $p in
-L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
-R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
-l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
esac
fi
case $p in
=*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
esac
if test no = "$pre_test_object_deps_done"; then
case $prev in
-L | -R)
# Internal compiler library paths should come after those
# provided by the user. The postdeps already come after the
# user supplied libs so there is no need to process them.
if test -z "$_LT_TAGVAR(compiler_lib_search_path, $1)"; then
_LT_TAGVAR(compiler_lib_search_path, $1)=$prev$p
else
_LT_TAGVAR(compiler_lib_search_path, $1)="${_LT_TAGVAR(compiler_lib_search_path, $1)} $prev$p"
fi
;;
# The "-l" case would never come before the object being
# linked, so don't bother handling this case.
esac
else
if test -z "$_LT_TAGVAR(postdeps, $1)"; then
_LT_TAGVAR(postdeps, $1)=$prev$p
else
_LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} $prev$p"
fi
fi
prev=
;;
*.lto.$objext) ;; # Ignore GCC LTO objects
*.$objext)
# This assumes that the test object file only shows up
# once in the compiler output.
if test "$p" = "conftest.$objext"; then
pre_test_object_deps_done=yes
continue
fi
if test no = "$pre_test_object_deps_done"; then
if test -z "$_LT_TAGVAR(predep_objects, $1)"; then
_LT_TAGVAR(predep_objects, $1)=$p
else
_LT_TAGVAR(predep_objects, $1)="$_LT_TAGVAR(predep_objects, $1) $p"
fi
else
if test -z "$_LT_TAGVAR(postdep_objects, $1)"; then
_LT_TAGVAR(postdep_objects, $1)=$p
else
_LT_TAGVAR(postdep_objects, $1)="$_LT_TAGVAR(postdep_objects, $1) $p"
fi
fi
;;
*) ;; # Ignore the rest.
esac
done
# Clean up.
rm -f a.out a.exe
else
echo "libtool.m4: error: problem compiling $1 test program"
fi
$RM -f conftest.$objext
CFLAGS=$_lt_libdeps_save_CFLAGS
# PORTME: override above test on systems where it is broken
m4_if([$1], [CXX],
[case $host_os in
interix[[3-9]]*)
# Interix 3.5 installs completely hosed .la files for C++, so rather than
# hack all around it, let's just trust "g++" to DTRT.
_LT_TAGVAR(predep_objects,$1)=
_LT_TAGVAR(postdep_objects,$1)=
_LT_TAGVAR(postdeps,$1)=
;;
esac
])
case " $_LT_TAGVAR(postdeps, $1) " in
*" -lc "*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;;
esac
_LT_TAGVAR(compiler_lib_search_dirs, $1)=
if test -n "${_LT_TAGVAR(compiler_lib_search_path, $1)}"; then
_LT_TAGVAR(compiler_lib_search_dirs, $1)=`echo " ${_LT_TAGVAR(compiler_lib_search_path, $1)}" | $SED -e 's! -L! !g' -e 's!^ !!'`
fi
_LT_TAGDECL([], [compiler_lib_search_dirs], [1],
[The directories searched by this compiler when creating a shared library])
_LT_TAGDECL([], [predep_objects], [1],
[Dependencies to place before and after the objects being linked to
create a shared library])
_LT_TAGDECL([], [postdep_objects], [1])
_LT_TAGDECL([], [predeps], [1])
_LT_TAGDECL([], [postdeps], [1])
_LT_TAGDECL([], [compiler_lib_search_path], [1],
[The library search path used internally by the compiler when linking
a shared library])
])# _LT_SYS_HIDDEN_LIBDEPS
# _LT_LANG_F77_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for a Fortran 77 compiler are
# suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_F77_CONFIG],
[AC_LANG_PUSH(Fortran 77)
if test -z "$F77" || test no = "$F77"; then
_lt_disable_F77=yes
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
_LT_TAGVAR(no_undefined_flag, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
# Source file extension for f77 test sources.
ac_ext=f
# Object file extension for compiled f77 test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# No sense in running all these tests if we already determined that
# the F77 compiler isn't working. Some variables (like enable_shared)
# are currently assumed to apply to all compilers on this platform,
# and will be corrupted by setting them based on a non-working compiler.
if test yes != "$_lt_disable_F77"; then
# Code to be used in simple compile tests
lt_simple_compile_test_code="\
subroutine t
return
end
"
# Code to be used in simple link tests
lt_simple_link_test_code="\
program t
end
"
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_GCC=$GCC
lt_save_CFLAGS=$CFLAGS
CC=${F77-"f77"}
CFLAGS=$FFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
GCC=$G77
if test -n "$compiler"; then
AC_MSG_CHECKING([if libtool supports shared libraries])
AC_MSG_RESULT([$can_build_shared])
AC_MSG_CHECKING([whether to build shared libraries])
test no = "$can_build_shared" && enable_shared=no
# On AIX, shared libraries and static libraries use the same namespace, and
# are all built from PIC.
case $host_os in
aix3*)
test yes = "$enable_shared" && enable_static=no
if test -n "$RANLIB"; then
archive_cmds="$archive_cmds~\$RANLIB \$lib"
postinstall_cmds='$RANLIB $lib'
fi
;;
aix[[4-9]]*)
if test ia64 != "$host_cpu"; then
case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in
yes,aix,yes) ;; # shared object as lib.so file only
yes,svr4,*) ;; # shared object as lib.so archive member only
yes,*) enable_static=no ;; # shared object in lib.a archive as well
esac
fi
;;
esac
AC_MSG_RESULT([$enable_shared])
AC_MSG_CHECKING([whether to build static libraries])
# Make sure either enable_shared or enable_static is yes.
test yes = "$enable_shared" || enable_static=yes
AC_MSG_RESULT([$enable_static])
_LT_TAGVAR(GCC, $1)=$G77
_LT_TAGVAR(LD, $1)=$LD
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi # test -n "$compiler"
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
fi # test yes != "$_lt_disable_F77"
AC_LANG_POP
])# _LT_LANG_F77_CONFIG
# _LT_LANG_FC_CONFIG([TAG])
# -------------------------
# Ensure that the configuration variables for a Fortran compiler are
# suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_FC_CONFIG],
[AC_LANG_PUSH(Fortran)
if test -z "$FC" || test no = "$FC"; then
_lt_disable_FC=yes
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
_LT_TAGVAR(no_undefined_flag, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
# Source file extension for fc test sources.
ac_ext=${ac_fc_srcext-f}
# Object file extension for compiled fc test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# No sense in running all these tests if we already determined that
# the FC compiler isn't working. Some variables (like enable_shared)
# are currently assumed to apply to all compilers on this platform,
# and will be corrupted by setting them based on a non-working compiler.
if test yes != "$_lt_disable_FC"; then
# Code to be used in simple compile tests
lt_simple_compile_test_code="\
subroutine t
return
end
"
# Code to be used in simple link tests
lt_simple_link_test_code="\
program t
end
"
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_GCC=$GCC
lt_save_CFLAGS=$CFLAGS
CC=${FC-"f95"}
CFLAGS=$FCFLAGS
compiler=$CC
GCC=$ac_cv_fc_compiler_gnu
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
if test -n "$compiler"; then
AC_MSG_CHECKING([if libtool supports shared libraries])
AC_MSG_RESULT([$can_build_shared])
AC_MSG_CHECKING([whether to build shared libraries])
test no = "$can_build_shared" && enable_shared=no
# On AIX, shared libraries and static libraries use the same namespace, and
# are all built from PIC.
case $host_os in
aix3*)
test yes = "$enable_shared" && enable_static=no
if test -n "$RANLIB"; then
archive_cmds="$archive_cmds~\$RANLIB \$lib"
postinstall_cmds='$RANLIB $lib'
fi
;;
aix[[4-9]]*)
if test ia64 != "$host_cpu"; then
case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in
yes,aix,yes) ;; # shared object as lib.so file only
yes,svr4,*) ;; # shared object as lib.so archive member only
yes,*) enable_static=no ;; # shared object in lib.a archive as well
esac
fi
;;
esac
AC_MSG_RESULT([$enable_shared])
AC_MSG_CHECKING([whether to build static libraries])
# Make sure either enable_shared or enable_static is yes.
test yes = "$enable_shared" || enable_static=yes
AC_MSG_RESULT([$enable_static])
_LT_TAGVAR(GCC, $1)=$ac_cv_fc_compiler_gnu
_LT_TAGVAR(LD, $1)=$LD
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
_LT_SYS_HIDDEN_LIBDEPS($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi # test -n "$compiler"
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
fi # test yes != "$_lt_disable_FC"
AC_LANG_POP
])# _LT_LANG_FC_CONFIG
# _LT_LANG_GCJ_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for the GNU Java Compiler compiler
# are suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_GCJ_CONFIG],
[AC_REQUIRE([LT_PROG_GCJ])dnl
AC_LANG_SAVE
# Source file extension for Java test sources.
ac_ext=java
# Object file extension for compiled Java test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code="class foo {}"
# Code to be used in simple link tests
lt_simple_link_test_code='public class conftest { public static void main(String[[]] argv) {}; }'
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_GCC=$GCC
GCC=yes
CC=${GCJ-"gcj"}
CFLAGS=$GCJFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_TAGVAR(LD, $1)=$LD
_LT_CC_BASENAME([$compiler])
# GCJ postdates the GCC versions that did not implicitly link in libc,
# so -lc is never needed.
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
if test -n "$compiler"; then
_LT_COMPILER_NO_RTTI($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi
AC_LANG_RESTORE
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
])# _LT_LANG_GCJ_CONFIG
# _LT_LANG_GO_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for the GNU Go compiler
# are suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_GO_CONFIG],
[AC_REQUIRE([LT_PROG_GO])dnl
AC_LANG_SAVE
# Source file extension for Go test sources.
ac_ext=go
# Object file extension for compiled Go test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code="package main; func main() { }"
# Code to be used in simple link tests
lt_simple_link_test_code='package main; func main() { }'
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_GCC=$GCC
GCC=yes
CC=${GOC-"gccgo"}
CFLAGS=$GOFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_TAGVAR(LD, $1)=$LD
_LT_CC_BASENAME([$compiler])
# Go support postdates the GCC versions that did not implicitly link in libc,
# so -lc is never needed.
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
if test -n "$compiler"; then
_LT_COMPILER_NO_RTTI($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi
AC_LANG_RESTORE
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
])# _LT_LANG_GO_CONFIG
# _LT_LANG_RC_CONFIG([TAG])
# -------------------------
# Ensure that the configuration variables for the Windows resource compiler
# are suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_RC_CONFIG],
[AC_REQUIRE([LT_PROG_RC])dnl
AC_LANG_SAVE
# Source file extension for RC test sources.
ac_ext=rc
# Object file extension for compiled RC test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }'
# Code to be used in simple link tests
lt_simple_link_test_code=$lt_simple_compile_test_code
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_GCC=$GCC
GCC=
CC=${RC-"windres"}
CFLAGS=
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes
if test -n "$compiler"; then
:
_LT_CONFIG($1)
fi
GCC=$lt_save_GCC
AC_LANG_RESTORE
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
])# _LT_LANG_RC_CONFIG
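# Usage sketch (hypothetical configure.ac snippet, not part of this file):
# a package shipping Windows resources would enable this tag with, e.g.
#   LT_INIT([win32-dll])
#   LT_LANG([Windows Resource])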
# LT_PROG_GCJ
# -----------
AC_DEFUN([LT_PROG_GCJ],
[m4_ifdef([AC_PROG_GCJ], [AC_PROG_GCJ],
[m4_ifdef([A][M_PROG_GCJ], [A][M_PROG_GCJ],
[AC_CHECK_TOOL(GCJ, gcj,)
test set = "${GCJFLAGS+set}" || GCJFLAGS="-g -O2"
AC_SUBST(GCJFLAGS)])])[]dnl
])
# Old name:
AU_ALIAS([LT_AC_PROG_GCJ], [LT_PROG_GCJ])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([LT_AC_PROG_GCJ], [])
# LT_PROG_GO
# ----------
AC_DEFUN([LT_PROG_GO],
[AC_CHECK_TOOL(GOC, gccgo,)
])
# LT_PROG_RC
# ----------
AC_DEFUN([LT_PROG_RC],
[AC_CHECK_TOOL(RC, windres,)
])
# Old name:
AU_ALIAS([LT_AC_PROG_RC], [LT_PROG_RC])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([LT_AC_PROG_RC], [])
# _LT_DECL_EGREP
# --------------
# If we don't have a new enough Autoconf to choose the best grep
# available, choose the one first in the user's PATH.
m4_defun([_LT_DECL_EGREP],
[AC_REQUIRE([AC_PROG_EGREP])dnl
AC_REQUIRE([AC_PROG_FGREP])dnl
test -z "$GREP" && GREP=grep
_LT_DECL([], [GREP], [1], [A grep program that handles long lines])
_LT_DECL([], [EGREP], [1], [An ERE matcher])
_LT_DECL([], [FGREP], [1], [A literal string matcher])
dnl Non-bleeding-edge autoconf doesn't subst GREP, so do it here too
AC_SUBST([GREP])
])
# _LT_DECL_OBJDUMP
# ----------------
# If we don't have a new enough Autoconf to choose the best objdump
# available, choose the one first in the user's PATH.
m4_defun([_LT_DECL_OBJDUMP],
[AC_CHECK_TOOL(OBJDUMP, objdump, false)
test -z "$OBJDUMP" && OBJDUMP=objdump
_LT_DECL([], [OBJDUMP], [1], [An object symbol dumper])
AC_SUBST([OBJDUMP])
])
# _LT_DECL_DLLTOOL
# ----------------
# Ensure DLLTOOL variable is set.
m4_defun([_LT_DECL_DLLTOOL],
[AC_CHECK_TOOL(DLLTOOL, dlltool, false)
test -z "$DLLTOOL" && DLLTOOL=dlltool
_LT_DECL([], [DLLTOOL], [1], [DLL creation program])
AC_SUBST([DLLTOOL])
])
# _LT_DECL_SED
# ------------
# Check for a fully-functional sed program, that truncates
# as few characters as possible. Prefer GNU sed if found.
m4_defun([_LT_DECL_SED],
[AC_PROG_SED
test -z "$SED" && SED=sed
Xsed="$SED -e 1s/^X//"
_LT_DECL([], [SED], [1], [A sed program that does not truncate output])
_LT_DECL([], [Xsed], ["\$SED -e 1s/^X//"],
[Sed that helps us avoid accidentally triggering echo(1) options like -n])
])# _LT_DECL_SED
m4_ifndef([AC_PROG_SED], [
############################################################
# NOTE: This macro has been submitted for inclusion into #
# GNU Autoconf as AC_PROG_SED. When it is available in #
# a released version of Autoconf we should remove this #
# macro and use it instead. #
############################################################
m4_defun([AC_PROG_SED],
[AC_MSG_CHECKING([for a sed that does not truncate output])
AC_CACHE_VAL(lt_cv_path_SED,
[# Loop through the user's path and test for sed and gsed.
# Then use that list of sed's as ones to test for truncation.
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
for as_dir in $PATH
do
IFS=$as_save_IFS
test -z "$as_dir" && as_dir=.
for lt_ac_prog in sed gsed; do
for ac_exec_ext in '' $ac_executable_extensions; do
if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then
lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext"
fi
done
done
done
IFS=$as_save_IFS
lt_ac_max=0
lt_ac_count=0
# Add /usr/xpg4/bin/sed as it is typically found on Solaris
# along with /bin/sed that truncates output.
for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do
test ! -f "$lt_ac_sed" && continue
cat /dev/null > conftest.in
lt_ac_count=0
echo $ECHO_N "0123456789$ECHO_C" >conftest.in
# Check for GNU sed and select it if it is found.
if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then
lt_cv_path_SED=$lt_ac_sed
break
fi
while true; do
cat conftest.in conftest.in >conftest.tmp
mv conftest.tmp conftest.in
cp conftest.in conftest.nl
echo >>conftest.nl
$lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break
cmp -s conftest.out conftest.nl || break
# 10000 chars as input seems more than enough
test 10 -lt "$lt_ac_count" && break
lt_ac_count=`expr $lt_ac_count + 1`
if test "$lt_ac_count" -gt "$lt_ac_max"; then
lt_ac_max=$lt_ac_count
lt_cv_path_SED=$lt_ac_sed
fi
done
done
])
SED=$lt_cv_path_SED
AC_SUBST([SED])
AC_MSG_RESULT([$SED])
])#AC_PROG_SED
])#m4_ifndef
# Old name:
AU_ALIAS([LT_AC_PROG_SED], [AC_PROG_SED])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([LT_AC_PROG_SED], [])
# _LT_CHECK_SHELL_FEATURES
# ------------------------
# Find out whether the shell is Bourne or XSI compatible,
# or has some other useful features.
m4_defun([_LT_CHECK_SHELL_FEATURES],
[if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
lt_unset=unset
else
lt_unset=false
fi
_LT_DECL([], [lt_unset], [0], [whether the shell understands "unset"])dnl
# test EBCDIC or ASCII
case `echo X|tr X '\101'` in
A) # ASCII based system
# \n is not interpreted correctly by Solaris 8 /usr/ucb/tr
lt_SP2NL='tr \040 \012'
lt_NL2SP='tr \015\012 \040\040'
;;
*) # EBCDIC based system
lt_SP2NL='tr \100 \n'
lt_NL2SP='tr \r\n \100\100'
;;
esac
_LT_DECL([SP2NL], [lt_SP2NL], [1], [turn spaces into newlines])dnl
_LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl
])# _LT_CHECK_SHELL_FEATURES
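# Illustration (assumed shell session, not executed by this macro): with the
# ASCII definitions above,
#   printf 'a b' | $lt_SP2NL      # prints "a" and "b" on separate lines
#   printf 'a\nb' | $lt_NL2SP     # prints "a b" on a single line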
# _LT_PATH_CONVERSION_FUNCTIONS
# -----------------------------
# Determine what file name conversion functions should be used by
# func_to_host_file (and, implicitly, by func_to_host_path). These are needed
# for certain cross-compile configurations and native mingw.
m4_defun([_LT_PATH_CONVERSION_FUNCTIONS],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
AC_MSG_CHECKING([how to convert $build file names to $host format])
AC_CACHE_VAL(lt_cv_to_host_file_cmd,
[case $host in
*-*-mingw* )
case $build in
*-*-mingw* ) # actually msys
lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
;;
*-*-cygwin* )
lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
;;
* ) # otherwise, assume *nix
lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
;;
esac
;;
*-*-cygwin* )
case $build in
*-*-mingw* ) # actually msys
lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
;;
*-*-cygwin* )
lt_cv_to_host_file_cmd=func_convert_file_noop
;;
* ) # otherwise, assume *nix
lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
;;
esac
;;
* ) # unhandled hosts (and "normal" native builds)
lt_cv_to_host_file_cmd=func_convert_file_noop
;;
esac
])
to_host_file_cmd=$lt_cv_to_host_file_cmd
AC_MSG_RESULT([$lt_cv_to_host_file_cmd])
_LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd],
[0], [convert $build file names to $host format])dnl
AC_MSG_CHECKING([how to convert $build file names to toolchain format])
AC_CACHE_VAL(lt_cv_to_tool_file_cmd,
[#assume ordinary cross tools, or native build.
lt_cv_to_tool_file_cmd=func_convert_file_noop
case $host in
*-*-mingw* )
case $build in
*-*-mingw* ) # actually msys
lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
;;
esac
;;
esac
])
to_tool_file_cmd=$lt_cv_to_tool_file_cmd
AC_MSG_RESULT([$lt_cv_to_tool_file_cmd])
_LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd],
[0], [convert $build files to toolchain format])dnl
])# _LT_PATH_CONVERSION_FUNCTIONS
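# Illustration (assumed behaviour, for orientation only): for a mingw host
# built under msys, func_convert_file_msys_to_w32 would turn a path such as
#   /c/Users/build   into something like   c:\Users\build
# before it is handed to the native Windows toolchain.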
liblognorm-2.0.6/m4/ltversion.m4 0000644 0001750 0001750 00000001273 13370251147 013421 0000000 0000000 # ltversion.m4 -- version numbers -*- Autoconf -*-
#
# Copyright (C) 2004, 2011-2015 Free Software Foundation, Inc.
# Written by Scott James Remnant, 2004
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
# @configure_input@
# serial 4179 ltversion.m4
# This file is part of GNU Libtool
m4_define([LT_PACKAGE_VERSION], [2.4.6])
m4_define([LT_PACKAGE_REVISION], [2.4.6])
AC_DEFUN([LTVERSION_VERSION],
[macro_version='2.4.6'
macro_revision='2.4.6'
_LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?])
_LT_DECL(, macro_revision, 0)
])
liblognorm-2.0.6/m4/ltoptions.m4 0000644 0001750 0001750 00000034262 13370251147 013433 0000000 0000000 # Helper functions for option handling. -*- Autoconf -*-
#
# Copyright (C) 2004-2005, 2007-2009, 2011-2015 Free Software
# Foundation, Inc.
# Written by Gary V. Vaughan, 2004
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
# serial 8 ltoptions.m4
# This is to help aclocal find these macros, as it can't see m4_define.
AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])])
# _LT_MANGLE_OPTION(MACRO-NAME, OPTION-NAME)
# ------------------------------------------
m4_define([_LT_MANGLE_OPTION],
[[_LT_OPTION_]m4_bpatsubst($1__$2, [[^a-zA-Z0-9_]], [_])])
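# Illustration (not used by the code): the mangled flag-macro name expands,
# for example, as
#   _LT_MANGLE_OPTION([LT_INIT], [dlopen])  =>  _LT_OPTION_LT_INIT__dlopen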
# _LT_SET_OPTION(MACRO-NAME, OPTION-NAME)
# ---------------------------------------
# Set option OPTION-NAME for macro MACRO-NAME, and if there is a
# matching handler defined, dispatch to it. Other OPTION-NAMEs are
# saved as a flag.
m4_define([_LT_SET_OPTION],
[m4_define(_LT_MANGLE_OPTION([$1], [$2]))dnl
m4_ifdef(_LT_MANGLE_DEFUN([$1], [$2]),
_LT_MANGLE_DEFUN([$1], [$2]),
[m4_warning([Unknown $1 option '$2'])])[]dnl
])
# _LT_IF_OPTION(MACRO-NAME, OPTION-NAME, IF-SET, [IF-NOT-SET])
# ------------------------------------------------------------
# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise.
m4_define([_LT_IF_OPTION],
[m4_ifdef(_LT_MANGLE_OPTION([$1], [$2]), [$3], [$4])])
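# Illustrative sketch (hypothetical caller, not part of this file): branch on
# whether the 'shared' option was requested for LT_INIT:
#   _LT_IF_OPTION([LT_INIT], [shared],
#                 [AC_MSG_NOTICE([shared explicitly requested])],
#                 [AC_MSG_NOTICE([shared not explicitly requested])])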
# _LT_UNLESS_OPTIONS(MACRO-NAME, OPTION-LIST, IF-NOT-SET)
# -------------------------------------------------------
# Execute IF-NOT-SET unless all options in OPTION-LIST for MACRO-NAME
# are set.
m4_define([_LT_UNLESS_OPTIONS],
[m4_foreach([_LT_Option], m4_split(m4_normalize([$2])),
[m4_ifdef(_LT_MANGLE_OPTION([$1], _LT_Option),
[m4_define([$0_found])])])[]dnl
m4_ifdef([$0_found], [m4_undefine([$0_found])], [$3
])[]dnl
])
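# Illustrative sketch: run a default handler only when neither of a pair of
# opposing options was given, exactly as _LT_SET_OPTIONS does below, e.g.
#   _LT_UNLESS_OPTIONS([LT_INIT], [pic-only no-pic], [_LT_WITH_PIC])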
# _LT_SET_OPTIONS(MACRO-NAME, OPTION-LIST)
# ----------------------------------------
# OPTION-LIST is a space-separated list of Libtool options associated
# with MACRO-NAME. If any OPTION has a matching handler declared with
# LT_OPTION_DEFINE, dispatch to that macro; otherwise complain about
# the unknown option and exit.
m4_defun([_LT_SET_OPTIONS],
[# Set options
m4_foreach([_LT_Option], m4_split(m4_normalize([$2])),
[_LT_SET_OPTION([$1], _LT_Option)])
m4_if([$1],[LT_INIT],[
dnl
dnl Simply set some default values (i.e off) if boolean options were not
dnl specified:
_LT_UNLESS_OPTIONS([LT_INIT], [dlopen], [enable_dlopen=no
])
_LT_UNLESS_OPTIONS([LT_INIT], [win32-dll], [enable_win32_dll=no
])
dnl
dnl If no reference was made to various pairs of opposing options, then
dnl we run the default mode handler for the pair. For example, if neither
dnl 'shared' nor 'disable-shared' was passed, we enable building of shared
dnl archives by default:
_LT_UNLESS_OPTIONS([LT_INIT], [shared disable-shared], [_LT_ENABLE_SHARED])
_LT_UNLESS_OPTIONS([LT_INIT], [static disable-static], [_LT_ENABLE_STATIC])
_LT_UNLESS_OPTIONS([LT_INIT], [pic-only no-pic], [_LT_WITH_PIC])
_LT_UNLESS_OPTIONS([LT_INIT], [fast-install disable-fast-install],
[_LT_ENABLE_FAST_INSTALL])
_LT_UNLESS_OPTIONS([LT_INIT], [aix-soname=aix aix-soname=both aix-soname=svr4],
[_LT_WITH_AIX_SONAME([aix])])
])
])# _LT_SET_OPTIONS
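# Usage sketch (hypothetical configure.ac snippet, not part of this file):
# the OPTION-LIST is simply LT_INIT's first argument, e.g.
#   LT_INIT([dlopen disable-static])
# which dispatches the 'dlopen' and 'disable-static' handlers defined below.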
## --------------------------------- ##
## Macros to handle LT_INIT options. ##
## --------------------------------- ##
# _LT_MANGLE_DEFUN(MACRO-NAME, OPTION-NAME)
# -----------------------------------------
m4_define([_LT_MANGLE_DEFUN],
[[_LT_OPTION_DEFUN_]m4_bpatsubst(m4_toupper([$1__$2]), [[^A-Z0-9_]], [_])])
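# Illustration (not used by the code): the handler-macro name is the
# upper-cased, mangled pair, e.g.
#   _LT_MANGLE_DEFUN([LT_INIT], [dlopen])  =>  _LT_OPTION_DEFUN_LT_INIT__DLOPEN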
# LT_OPTION_DEFINE(MACRO-NAME, OPTION-NAME, CODE)
# -----------------------------------------------
m4_define([LT_OPTION_DEFINE],
[m4_define(_LT_MANGLE_DEFUN([$1], [$2]), [$3])[]dnl
])# LT_OPTION_DEFINE
# dlopen
# ------
LT_OPTION_DEFINE([LT_INIT], [dlopen], [enable_dlopen=yes
])
AU_DEFUN([AC_LIBTOOL_DLOPEN],
[_LT_SET_OPTION([LT_INIT], [dlopen])
AC_DIAGNOSE([obsolete],
[$0: Remove this warning and the call to _LT_SET_OPTION when you
put the 'dlopen' option into LT_INIT's first parameter.])
])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_DLOPEN], [])
# win32-dll
# ---------
# Declare package support for building win32 dll's.
LT_OPTION_DEFINE([LT_INIT], [win32-dll],
[enable_win32_dll=yes
case $host in
*-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-cegcc*)
AC_CHECK_TOOL(AS, as, false)
AC_CHECK_TOOL(DLLTOOL, dlltool, false)
AC_CHECK_TOOL(OBJDUMP, objdump, false)
;;
esac
test -z "$AS" && AS=as
_LT_DECL([], [AS], [1], [Assembler program])dnl
test -z "$DLLTOOL" && DLLTOOL=dlltool
_LT_DECL([], [DLLTOOL], [1], [DLL creation program])dnl
test -z "$OBJDUMP" && OBJDUMP=objdump
_LT_DECL([], [OBJDUMP], [1], [Object dumper program])dnl
])# win32-dll
AU_DEFUN([AC_LIBTOOL_WIN32_DLL],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
_LT_SET_OPTION([LT_INIT], [win32-dll])
AC_DIAGNOSE([obsolete],
[$0: Remove this warning and the call to _LT_SET_OPTION when you
put the 'win32-dll' option into LT_INIT's first parameter.])
])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_WIN32_DLL], [])
# _LT_ENABLE_SHARED([DEFAULT])
# ----------------------------
# implement the --enable-shared flag, and supports the 'shared' and
# 'disable-shared' LT_INIT options.
# DEFAULT is either 'yes' or 'no'. If omitted, it defaults to 'yes'.
m4_define([_LT_ENABLE_SHARED],
[m4_define([_LT_ENABLE_SHARED_DEFAULT], [m4_if($1, no, no, yes)])dnl
AC_ARG_ENABLE([shared],
[AS_HELP_STRING([--enable-shared@<:@=PKGS@:>@],
[build shared libraries @<:@default=]_LT_ENABLE_SHARED_DEFAULT[@:>@])],
[p=${PACKAGE-default}
case $enableval in
yes) enable_shared=yes ;;
no) enable_shared=no ;;
*)
enable_shared=no
# Look at the argument we got. We use all the common list separators.
lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
for pkg in $enableval; do
IFS=$lt_save_ifs
if test "X$pkg" = "X$p"; then
enable_shared=yes
fi
done
IFS=$lt_save_ifs
;;
esac],
[enable_shared=]_LT_ENABLE_SHARED_DEFAULT)
_LT_DECL([build_libtool_libs], [enable_shared], [0],
[Whether or not to build shared libraries])
])# _LT_ENABLE_SHARED
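# Illustration (hypothetical package invocations, not part of this file):
#   ./configure --disable-shared          # build static archives only
#   ./configure --enable-shared=foo,bar   # enable shared libs only when the
#                                         # package being configured is named
#                                         # 'foo' or 'bar'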
LT_OPTION_DEFINE([LT_INIT], [shared], [_LT_ENABLE_SHARED([yes])])
LT_OPTION_DEFINE([LT_INIT], [disable-shared], [_LT_ENABLE_SHARED([no])])
# Old names:
AC_DEFUN([AC_ENABLE_SHARED],
[_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[shared])
])
AC_DEFUN([AC_DISABLE_SHARED],
[_LT_SET_OPTION([LT_INIT], [disable-shared])
])
AU_DEFUN([AM_ENABLE_SHARED], [AC_ENABLE_SHARED($@)])
AU_DEFUN([AM_DISABLE_SHARED], [AC_DISABLE_SHARED($@)])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AM_ENABLE_SHARED], [])
dnl AC_DEFUN([AM_DISABLE_SHARED], [])
# _LT_ENABLE_STATIC([DEFAULT])
# ----------------------------
# implement the --enable-static flag, and support the 'static' and
# 'disable-static' LT_INIT options.
# DEFAULT is either 'yes' or 'no'. If omitted, it defaults to 'yes'.
m4_define([_LT_ENABLE_STATIC],
[m4_define([_LT_ENABLE_STATIC_DEFAULT], [m4_if($1, no, no, yes)])dnl
AC_ARG_ENABLE([static],
[AS_HELP_STRING([--enable-static@<:@=PKGS@:>@],
[build static libraries @<:@default=]_LT_ENABLE_STATIC_DEFAULT[@:>@])],
[p=${PACKAGE-default}
case $enableval in
yes) enable_static=yes ;;
no) enable_static=no ;;
*)
enable_static=no
# Look at the argument we got. We use all the common list separators.
lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
for pkg in $enableval; do
IFS=$lt_save_ifs
if test "X$pkg" = "X$p"; then
enable_static=yes
fi
done
IFS=$lt_save_ifs
;;
esac],
[enable_static=]_LT_ENABLE_STATIC_DEFAULT)
_LT_DECL([build_old_libs], [enable_static], [0],
[Whether or not to build static libraries])
])# _LT_ENABLE_STATIC
LT_OPTION_DEFINE([LT_INIT], [static], [_LT_ENABLE_STATIC([yes])])
LT_OPTION_DEFINE([LT_INIT], [disable-static], [_LT_ENABLE_STATIC([no])])
# Old names:
AC_DEFUN([AC_ENABLE_STATIC],
[_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[static])
])
AC_DEFUN([AC_DISABLE_STATIC],
[_LT_SET_OPTION([LT_INIT], [disable-static])
])
AU_DEFUN([AM_ENABLE_STATIC], [AC_ENABLE_STATIC($@)])
AU_DEFUN([AM_DISABLE_STATIC], [AC_DISABLE_STATIC($@)])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AM_ENABLE_STATIC], [])
dnl AC_DEFUN([AM_DISABLE_STATIC], [])
# _LT_ENABLE_FAST_INSTALL([DEFAULT])
# ----------------------------------
# implement the --enable-fast-install flag, and support the 'fast-install'
# and 'disable-fast-install' LT_INIT options.
# DEFAULT is either 'yes' or 'no'. If omitted, it defaults to 'yes'.
m4_define([_LT_ENABLE_FAST_INSTALL],
[m4_define([_LT_ENABLE_FAST_INSTALL_DEFAULT], [m4_if($1, no, no, yes)])dnl
AC_ARG_ENABLE([fast-install],
[AS_HELP_STRING([--enable-fast-install@<:@=PKGS@:>@],
[optimize for fast installation @<:@default=]_LT_ENABLE_FAST_INSTALL_DEFAULT[@:>@])],
[p=${PACKAGE-default}
case $enableval in
yes) enable_fast_install=yes ;;
no) enable_fast_install=no ;;
*)
enable_fast_install=no
# Look at the argument we got. We use all the common list separators.
lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
for pkg in $enableval; do
IFS=$lt_save_ifs
if test "X$pkg" = "X$p"; then
enable_fast_install=yes
fi
done
IFS=$lt_save_ifs
;;
esac],
[enable_fast_install=]_LT_ENABLE_FAST_INSTALL_DEFAULT)
_LT_DECL([fast_install], [enable_fast_install], [0],
[Whether or not to optimize for fast installation])dnl
])# _LT_ENABLE_FAST_INSTALL
LT_OPTION_DEFINE([LT_INIT], [fast-install], [_LT_ENABLE_FAST_INSTALL([yes])])
LT_OPTION_DEFINE([LT_INIT], [disable-fast-install], [_LT_ENABLE_FAST_INSTALL([no])])
# Old names:
AU_DEFUN([AC_ENABLE_FAST_INSTALL],
[_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[fast-install])
AC_DIAGNOSE([obsolete],
[$0: Remove this warning and the call to _LT_SET_OPTION when you put
the 'fast-install' option into LT_INIT's first parameter.])
])
AU_DEFUN([AC_DISABLE_FAST_INSTALL],
[_LT_SET_OPTION([LT_INIT], [disable-fast-install])
AC_DIAGNOSE([obsolete],
[$0: Remove this warning and the call to _LT_SET_OPTION when you put
the 'disable-fast-install' option into LT_INIT's first parameter.])
])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_ENABLE_FAST_INSTALL], [])
dnl AC_DEFUN([AM_DISABLE_FAST_INSTALL], [])
# _LT_WITH_AIX_SONAME([DEFAULT])
# ----------------------------------
# implement the --with-aix-soname flag, and support the `aix-soname=aix'
# and `aix-soname=both' and `aix-soname=svr4' LT_INIT options. DEFAULT
# is either `aix', `both' or `svr4'. If omitted, it defaults to `aix'.
m4_define([_LT_WITH_AIX_SONAME],
[m4_define([_LT_WITH_AIX_SONAME_DEFAULT], [m4_if($1, svr4, svr4, m4_if($1, both, both, aix))])dnl
shared_archive_member_spec=
case $host,$enable_shared in
power*-*-aix[[5-9]]*,yes)
AC_MSG_CHECKING([which variant of shared library versioning to provide])
AC_ARG_WITH([aix-soname],
[AS_HELP_STRING([--with-aix-soname=aix|svr4|both],
[shared library versioning (aka "SONAME") variant to provide on AIX, @<:@default=]_LT_WITH_AIX_SONAME_DEFAULT[@:>@.])],
[case $withval in
aix|svr4|both)
;;
*)
AC_MSG_ERROR([Unknown argument to --with-aix-soname])
;;
esac
lt_cv_with_aix_soname=$with_aix_soname],
[AC_CACHE_VAL([lt_cv_with_aix_soname],
[lt_cv_with_aix_soname=]_LT_WITH_AIX_SONAME_DEFAULT)
with_aix_soname=$lt_cv_with_aix_soname])
AC_MSG_RESULT([$with_aix_soname])
if test aix != "$with_aix_soname"; then
# For the AIX way of multilib, we name the shared archive member
# based on the bitwidth used, traditionally 'shr.o' or 'shr_64.o',
# and 'shr.imp' or 'shr_64.imp', respectively, for the Import File.
# Even when GNU compilers ignore OBJECT_MODE but need '-maix64' flag,
# the AIX toolchain works better with OBJECT_MODE set (default 32).
if test 64 = "${OBJECT_MODE-32}"; then
shared_archive_member_spec=shr_64
else
shared_archive_member_spec=shr
fi
fi
;;
*)
with_aix_soname=aix
;;
esac
_LT_DECL([], [shared_archive_member_spec], [0],
[Shared archive member basename, for filename based shared library versioning on AIX])dnl
])# _LT_WITH_AIX_SONAME
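# Illustration (hypothetical invocation on AIX, not part of this file):
#   ./configure --with-aix-soname=svr4
# selects filename-based (SVR4-style) shared library versioning instead of
# the traditional AIX archive-member scheme.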
LT_OPTION_DEFINE([LT_INIT], [aix-soname=aix], [_LT_WITH_AIX_SONAME([aix])])
LT_OPTION_DEFINE([LT_INIT], [aix-soname=both], [_LT_WITH_AIX_SONAME([both])])
LT_OPTION_DEFINE([LT_INIT], [aix-soname=svr4], [_LT_WITH_AIX_SONAME([svr4])])
# _LT_WITH_PIC([MODE])
# --------------------
# implement the --with-pic flag, and support the 'pic-only' and 'no-pic'
# LT_INIT options.
# MODE is either 'yes' or 'no'. If omitted, it defaults to 'both'.
m4_define([_LT_WITH_PIC],
[AC_ARG_WITH([pic],
[AS_HELP_STRING([--with-pic@<:@=PKGS@:>@],
[try to use only PIC/non-PIC objects @<:@default=use both@:>@])],
[lt_p=${PACKAGE-default}
case $withval in
yes|no) pic_mode=$withval ;;
*)
pic_mode=default
# Look at the argument we got. We use all the common list separators.
lt_save_ifs=$IFS; IFS=$IFS$PATH_SEPARATOR,
for lt_pkg in $withval; do
IFS=$lt_save_ifs
if test "X$lt_pkg" = "X$lt_p"; then
pic_mode=yes
fi
done
IFS=$lt_save_ifs
;;
esac],
[pic_mode=m4_default([$1], [default])])
_LT_DECL([], [pic_mode], [0], [What type of objects to build])dnl
])# _LT_WITH_PIC
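# Illustration (hypothetical invocations, not part of this file):
#   ./configure --with-pic        # build PIC objects only (pic_mode=yes)
#   ./configure --with-pic=foo    # PIC-only when the package being
#                                 # configured is named 'foo'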
LT_OPTION_DEFINE([LT_INIT], [pic-only], [_LT_WITH_PIC([yes])])
LT_OPTION_DEFINE([LT_INIT], [no-pic], [_LT_WITH_PIC([no])])
# Old name:
AU_DEFUN([AC_LIBTOOL_PICMODE],
[_LT_SET_OPTION([LT_INIT], [pic-only])
AC_DIAGNOSE([obsolete],
[$0: Remove this warning and the call to _LT_SET_OPTION when you
put the 'pic-only' option into LT_INIT's first parameter.])
])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_PICMODE], [])
## ----------------- ##
## LTDL_INIT Options ##
## ----------------- ##
m4_define([_LTDL_MODE], [])
LT_OPTION_DEFINE([LTDL_INIT], [nonrecursive],
[m4_define([_LTDL_MODE], [nonrecursive])])
LT_OPTION_DEFINE([LTDL_INIT], [recursive],
[m4_define([_LTDL_MODE], [recursive])])
LT_OPTION_DEFINE([LTDL_INIT], [subproject],
[m4_define([_LTDL_MODE], [subproject])])
m4_define([_LTDL_TYPE], [])
LT_OPTION_DEFINE([LTDL_INIT], [installable],
[m4_define([_LTDL_TYPE], [installable])])
LT_OPTION_DEFINE([LTDL_INIT], [convenience],
[m4_define([_LTDL_TYPE], [convenience])])
liblognorm-2.0.6/test-driver 0000755 0001750 0001750 00000011040 13370251155 013000 0000000 0000000 #! /bin/sh
# test-driver - basic testsuite driver script.
scriptversion=2013-07-13.22; # UTC
# Copyright (C) 2011-2014 Free Software Foundation, Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to <bug-automake@gnu.org> or send patches to
# <automake-patches@gnu.org>.
# Make unconditional expansion of undefined variables an error. This
# helps a lot in preventing typo-related bugs.
set -u
usage_error ()
{
echo "$0: $*" >&2
print_usage >&2
exit 2
}
print_usage ()
{
cat <<END
Usage:
  test-driver --test-name NAME --log-file PATH --trs-file PATH
              [--expect-failure {yes|no}] [--color-tests {yes|no}]
              [--enable-hard-errors {yes|no}] [--]
              TEST-SCRIPT [TEST-SCRIPT-ARGUMENTS]
The '--test-name', '--log-file' and '--trs-file' options are mandatory.
END
}

"$@" >$log_file 2>&1
estatus=$?
if test $enable_hard_errors = no && test $estatus -eq 99; then
tweaked_estatus=1
else
tweaked_estatus=$estatus
fi
case $tweaked_estatus:$expect_failure in
0:yes) col=$red res=XPASS recheck=yes gcopy=yes;;
0:*) col=$grn res=PASS recheck=no gcopy=no;;
77:*) col=$blu res=SKIP recheck=no gcopy=yes;;
99:*) col=$mgn res=ERROR recheck=yes gcopy=yes;;
*:yes) col=$lgn res=XFAIL recheck=no gcopy=yes;;
*:*) col=$red res=FAIL recheck=yes gcopy=yes;;
esac
# Report the test outcome and exit status in the logs, so that one can
# know whether the test passed or failed simply by looking at the '.log'
# file, without the need of also peeking into the corresponding '.trs'
# file (automake bug#11814).
echo "$res $test_name (exit status: $estatus)" >>$log_file
# Report outcome to console.
echo "${col}${res}${std}: $test_name"
# Register the test result, and other relevant metadata.
echo ":test-result: $res" > $trs_file
echo ":global-test-result: $res" >> $trs_file
echo ":recheck: $recheck" >> $trs_file
echo ":copy-in-global-log: $gcopy" >> $trs_file
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC"
# time-stamp-end: "; # UTC"
# End:
liblognorm-2.0.6/COPYING.ASL20 0000644 0001750 0001750 00000021661 13273030617 012427 0000000 0000000 Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
1. You must give any other recipients of the Work or Derivative Works a copy of this License; and
2. You must cause any modified files to carry prominent notices stating that You changed the files; and
3. You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
4. If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
liblognorm-2.0.6/missing 0000755 0001750 0001750 00000015330 13370251154 012206 0000000 0000000 #! /bin/sh
# Common wrapper for a few potentially missing GNU programs.
scriptversion=2013-10-28.13; # UTC
# Copyright (C) 1996-2014 Free Software Foundation, Inc.
# Originally written by Fran,cois Pinard <pinard@iro.umontreal.ca>, 1996.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
if test $# -eq 0; then
echo 1>&2 "Try '$0 --help' for more information"
exit 1
fi
case $1 in
--is-lightweight)
# Used by our autoconf macros to check whether the available missing
# script is modern enough.
exit 0
;;
--run)
# Back-compat with the calling convention used by older automake.
shift
;;
-h|--h|--he|--hel|--help)
echo "\
$0 [OPTION]... PROGRAM [ARGUMENT]...
Run 'PROGRAM [ARGUMENT]...', returning proper advice when this fails due
to PROGRAM being missing or too old.
Options:
-h, --help display this help and exit
-v, --version output version information and exit
Supported PROGRAM values:
aclocal autoconf autoheader autom4te automake makeinfo
bison yacc flex lex help2man
Version suffixes to PROGRAM as well as the prefixes 'gnu-', 'gnu', and
'g' are ignored when checking the name.
Send bug reports to <bug-automake@gnu.org>."
exit $?
;;
-v|--v|--ve|--ver|--vers|--versi|--versio|--version)
echo "missing $scriptversion (GNU Automake)"
exit $?
;;
-*)
echo 1>&2 "$0: unknown '$1' option"
echo 1>&2 "Try '$0 --help' for more information"
exit 1
;;
esac
# Run the given program, remember its exit status.
"$@"; st=$?
# If it succeeded, we are done.
test $st -eq 0 && exit 0
# Also exit now if it failed (or wasn't found), and '--version' was
# passed; such an option is passed most likely to detect whether the
# program is present and works.
case $2 in --version|--help) exit $st;; esac
# Exit code 63 means version mismatch. This often happens when the user
# tries to use an ancient version of a tool on a file that requires a
# minimum version.
if test $st -eq 63; then
msg="probably too old"
elif test $st -eq 127; then
# Program was missing.
msg="missing on your system"
else
# Program was found and executed, but failed. Give up.
exit $st
fi
perl_URL=http://www.perl.org/
flex_URL=http://flex.sourceforge.net/
gnu_software_URL=http://www.gnu.org/software
program_details ()
{
case $1 in
aclocal|automake)
echo "The '$1' program is part of the GNU Automake package:"
echo "<$gnu_software_URL/automake>"
echo "It also requires GNU Autoconf, GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/autoconf>"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
autoconf|autom4te|autoheader)
echo "The '$1' program is part of the GNU Autoconf package:"
echo "<$gnu_software_URL/autoconf/>"
echo "It also requires GNU m4 and Perl in order to run:"
echo "<$gnu_software_URL/m4/>"
echo "<$perl_URL>"
;;
esac
}
give_advice ()
{
# Normalize program name to check for.
normalized_program=`echo "$1" | sed '
s/^gnu-//; t
s/^gnu//; t
s/^g//; t'`
printf '%s\n' "'$1' is $msg."
configure_deps="'configure.ac' or m4 files included by 'configure.ac'"
case $normalized_program in
autoconf*)
echo "You should only need it if you modified 'configure.ac',"
echo "or m4 files included by it."
program_details 'autoconf'
;;
autoheader*)
echo "You should only need it if you modified 'acconfig.h' or"
echo "$configure_deps."
program_details 'autoheader'
;;
automake*)
echo "You should only need it if you modified 'Makefile.am' or"
echo "$configure_deps."
program_details 'automake'
;;
aclocal*)
echo "You should only need it if you modified 'acinclude.m4' or"
echo "$configure_deps."
program_details 'aclocal'
;;
autom4te*)
echo "You might have modified some maintainer files that require"
echo "the 'autom4te' program to be rebuilt."
program_details 'autom4te'
;;
bison*|yacc*)
echo "You should only need it if you modified a '.y' file."
echo "You may want to install the GNU Bison package:"
echo "<$gnu_software_URL/bison/>"
;;
lex*|flex*)
echo "You should only need it if you modified a '.l' file."
echo "You may want to install the Fast Lexical Analyzer package:"
echo "<$flex_URL>"
;;
help2man*)
echo "You should only need it if you modified a dependency" \
"of a man page."
echo "You may want to install the GNU Help2man package:"
echo "<$gnu_software_URL/help2man/>"
;;
makeinfo*)
echo "You should only need it if you modified a '.texi' file, or"
echo "any other file indirectly affecting the aspect of the manual."
echo "You might want to install the Texinfo package:"
echo "<$gnu_software_URL/texinfo/>"
echo "The spurious makeinfo call might also be the consequence of"
echo "using a buggy 'make' (AIX, DU, IRIX), in which case you might"
echo "want to install GNU make:"
echo "<$gnu_software_URL/make/>"
;;
*)
echo "You might have modified some files without having the proper"
echo "tools for further handling them. Check the 'README' file, it"
echo "often tells you about the needed prerequisites for installing"
echo "this package. You may also peek at any GNU archive site, in"
echo "case some other package contains this missing '$1' program."
;;
esac
}
give_advice "$1" | sed -e '1s/^/WARNING: /' \
-e '2,$s/^/ /' >&2
# Propagate the correct exit status (expected to be 127 for a program
# not found, 63 for a program that failed due to version mismatch).
exit $st
# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC"
# time-stamp-end: "; # UTC"
# End:
liblognorm-2.0.6/Makefile.am 0000644 0001750 0001750 00000000364 13273030617 012645 0000000 0000000 SUBDIRS = compat src tools
if ENABLE_DOCS
SUBDIRS += doc
endif
EXTRA_DIST = rulebases \
COPYING.ASL20
pkgconfigdir = $(libdir)/pkgconfig
pkgconfig_DATA = lognorm.pc
ACLOCAL_AMFLAGS = -I m4
if ENABLE_TESTBENCH
SUBDIRS += tests
endif
liblognorm-2.0.6/config.guess 0000755 0001750 0001750 00000126373 13370251154 013141 0000000 0000000 #! /bin/sh
# Attempt to guess a canonical system name.
# Copyright 1992-2018 Free Software Foundation, Inc.
timestamp='2018-02-24'
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://www.gnu.org/licenses/>.
#
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that
# program. This Exception is an additional permission under section 7
# of the GNU General Public License, version 3 ("GPLv3").
#
# Originally written by Per Bothner; maintained since 2000 by Ben Elliston.
#
# You can get the latest version of this script from:
# https://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess
#
# Please send patches to <config-patches@gnu.org>.
me=`echo "$0" | sed -e 's,.*/,,'`
usage="\
Usage: $0 [OPTION]
Output the configuration name of the system \`$me' is run on.
Options:
-h, --help print this help, then exit
-t, --time-stamp print date of last modification, then exit
-v, --version print version number, then exit
Report bugs and patches to <config-patches@gnu.org>."
version="\
GNU config.guess ($timestamp)
Originally written by Per Bothner.
Copyright 1992-2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
help="
Try \`$me --help' for more information."
# Parse command line
while test $# -gt 0 ; do
case $1 in
--time-stamp | --time* | -t )
echo "$timestamp" ; exit ;;
--version | -v )
echo "$version" ; exit ;;
--help | --h* | -h )
echo "$usage"; exit ;;
-- ) # Stop option processing
shift; break ;;
- ) # Use stdin as input.
break ;;
-* )
echo "$me: invalid option $1$help" >&2
exit 1 ;;
* )
break ;;
esac
done
if test $# != 0; then
echo "$me: too many arguments$help" >&2
exit 1
fi
trap 'exit 1' 1 2 15
# CC_FOR_BUILD -- compiler used by this script. Note that the use of a
# compiler to aid in system detection is discouraged as it requires
# temporary files to be created and, as you can see below, it is a
# headache to deal with in a portable fashion.
# Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still
# use `HOST_CC' if defined, but it is deprecated.
# Portable tmp directory creation inspired by the Autoconf team.
set_cc_for_build='
trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ;
trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ;
: ${TMPDIR=/tmp} ;
{ tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } ||
{ test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } ||
{ tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } ||
{ echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ;
dummy=$tmp/dummy ;
tmpfiles="$dummy.c $dummy.o $dummy.rel $dummy" ;
case $CC_FOR_BUILD,$HOST_CC,$CC in
,,) echo "int x;" > "$dummy.c" ;
for c in cc gcc c89 c99 ; do
if ($c -c -o "$dummy.o" "$dummy.c") >/dev/null 2>&1 ; then
CC_FOR_BUILD="$c"; break ;
fi ;
done ;
if test x"$CC_FOR_BUILD" = x ; then
CC_FOR_BUILD=no_compiler_found ;
fi
;;
,,*) CC_FOR_BUILD=$CC ;;
,*,*) CC_FOR_BUILD=$HOST_CC ;;
esac ; set_cc_for_build= ;'
# This is needed to find uname on a Pyramid OSx when run in the BSD universe.
# (ghazi@noc.rutgers.edu 1994-08-24)
if (test -f /.attbin/uname) >/dev/null 2>&1 ; then
PATH=$PATH:/.attbin ; export PATH
fi
UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown
UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown
UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown
UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown
case "$UNAME_SYSTEM" in
Linux|GNU|GNU/*)
# If the system lacks a compiler, then just pick glibc.
# We could probably try harder.
LIBC=gnu
eval "$set_cc_for_build"
cat <<-EOF > "$dummy.c"
#include <features.h>
#if defined(__UCLIBC__)
LIBC=uclibc
#elif defined(__dietlibc__)
LIBC=dietlibc
#else
LIBC=gnu
#endif
EOF
eval "`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^LIBC' | sed 's, ,,g'`"
# If ldd exists, use it to detect musl libc.
if command -v ldd >/dev/null && \
ldd --version 2>&1 | grep -q ^musl
then
LIBC=musl
fi
;;
esac
# Note: order is significant - the case branches are not exclusive.
case "$UNAME_MACHINE:$UNAME_SYSTEM:$UNAME_RELEASE:$UNAME_VERSION" in
*:NetBSD:*:*)
# NetBSD (nbsd) targets should (where applicable) match one or
# more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*,
# *-*-netbsdecoff* and *-*-netbsd*. For targets that recently
# switched to ELF, *-*-netbsd* would select the old
# object file format. This provides both forward
# compatibility and a consistent mechanism for selecting the
# object file format.
#
# Note: NetBSD doesn't particularly care about the vendor
# portion of the name. We always set it to "unknown".
sysctl="sysctl -n hw.machine_arch"
UNAME_MACHINE_ARCH=`(uname -p 2>/dev/null || \
"/sbin/$sysctl" 2>/dev/null || \
"/usr/sbin/$sysctl" 2>/dev/null || \
echo unknown)`
case "$UNAME_MACHINE_ARCH" in
armeb) machine=armeb-unknown ;;
arm*) machine=arm-unknown ;;
sh3el) machine=shl-unknown ;;
sh3eb) machine=sh-unknown ;;
sh5el) machine=sh5le-unknown ;;
earmv*)
arch=`echo "$UNAME_MACHINE_ARCH" | sed -e 's,^e\(armv[0-9]\).*$,\1,'`
endian=`echo "$UNAME_MACHINE_ARCH" | sed -ne 's,^.*\(eb\)$,\1,p'`
machine="${arch}${endian}"-unknown
;;
*) machine="$UNAME_MACHINE_ARCH"-unknown ;;
esac
# The Operating System including object format, if it has switched
# to ELF recently (or will in the future) and ABI.
case "$UNAME_MACHINE_ARCH" in
earm*)
os=netbsdelf
;;
arm*|i386|m68k|ns32k|sh3*|sparc|vax)
eval "$set_cc_for_build"
if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \
| grep -q __ELF__
then
# Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout).
# Return netbsd for either. FIX?
os=netbsd
else
os=netbsdelf
fi
;;
*)
os=netbsd
;;
esac
# Determine ABI tags.
case "$UNAME_MACHINE_ARCH" in
earm*)
expr='s/^earmv[0-9]/-eabi/;s/eb$//'
abi=`echo "$UNAME_MACHINE_ARCH" | sed -e "$expr"`
;;
esac
# The OS release
# Debian GNU/NetBSD machines have a different userland, and
# thus, need a distinct triplet. However, they do not need
# kernel version information, so it can be replaced with a
# suitable tag, in the style of linux-gnu.
case "$UNAME_VERSION" in
Debian*)
release='-gnu'
;;
*)
release=`echo "$UNAME_RELEASE" | sed -e 's/[-_].*//' | cut -d. -f1,2`
;;
esac
# Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM:
# contains redundant information, the shorter form:
# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used.
echo "$machine-${os}${release}${abi}"
exit ;;
*:Bitrig:*:*)
UNAME_MACHINE_ARCH=`arch | sed 's/Bitrig.//'`
echo "$UNAME_MACHINE_ARCH"-unknown-bitrig"$UNAME_RELEASE"
exit ;;
*:OpenBSD:*:*)
UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'`
echo "$UNAME_MACHINE_ARCH"-unknown-openbsd"$UNAME_RELEASE"
exit ;;
*:LibertyBSD:*:*)
UNAME_MACHINE_ARCH=`arch | sed 's/^.*BSD\.//'`
echo "$UNAME_MACHINE_ARCH"-unknown-libertybsd"$UNAME_RELEASE"
exit ;;
*:MidnightBSD:*:*)
echo "$UNAME_MACHINE"-unknown-midnightbsd"$UNAME_RELEASE"
exit ;;
*:ekkoBSD:*:*)
echo "$UNAME_MACHINE"-unknown-ekkobsd"$UNAME_RELEASE"
exit ;;
*:SolidBSD:*:*)
echo "$UNAME_MACHINE"-unknown-solidbsd"$UNAME_RELEASE"
exit ;;
macppc:MirBSD:*:*)
echo powerpc-unknown-mirbsd"$UNAME_RELEASE"
exit ;;
*:MirBSD:*:*)
echo "$UNAME_MACHINE"-unknown-mirbsd"$UNAME_RELEASE"
exit ;;
*:Sortix:*:*)
echo "$UNAME_MACHINE"-unknown-sortix
exit ;;
*:Redox:*:*)
echo "$UNAME_MACHINE"-unknown-redox
exit ;;
mips:OSF1:*.*)
echo mips-dec-osf1
exit ;;
alpha:OSF1:*:*)
case $UNAME_RELEASE in
*4.0)
UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'`
;;
*5.*)
UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'`
;;
esac
# According to Compaq, /usr/sbin/psrinfo has been available on
# OSF/1 and Tru64 systems produced since 1995. I hope that
# covers most systems running today. This code pipes the CPU
# types through head -n 1, so we only detect the type of CPU 0.
ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1`
case "$ALPHA_CPU_TYPE" in
"EV4 (21064)")
UNAME_MACHINE=alpha ;;
"EV4.5 (21064)")
UNAME_MACHINE=alpha ;;
"LCA4 (21066/21068)")
UNAME_MACHINE=alpha ;;
"EV5 (21164)")
UNAME_MACHINE=alphaev5 ;;
"EV5.6 (21164A)")
UNAME_MACHINE=alphaev56 ;;
"EV5.6 (21164PC)")
UNAME_MACHINE=alphapca56 ;;
"EV5.7 (21164PC)")
UNAME_MACHINE=alphapca57 ;;
"EV6 (21264)")
UNAME_MACHINE=alphaev6 ;;
"EV6.7 (21264A)")
UNAME_MACHINE=alphaev67 ;;
"EV6.8CB (21264C)")
UNAME_MACHINE=alphaev68 ;;
"EV6.8AL (21264B)")
UNAME_MACHINE=alphaev68 ;;
"EV6.8CX (21264D)")
UNAME_MACHINE=alphaev68 ;;
"EV6.9A (21264/EV69A)")
UNAME_MACHINE=alphaev69 ;;
"EV7 (21364)")
UNAME_MACHINE=alphaev7 ;;
"EV7.9 (21364A)")
UNAME_MACHINE=alphaev79 ;;
esac
# A Pn.n version is a patched version.
# A Vn.n version is a released version.
# A Tn.n version is a released field test version.
# A Xn.n version is an unreleased experimental baselevel.
# 1.2 uses "1.2" for uname -r.
echo "$UNAME_MACHINE"-dec-osf"`echo "$UNAME_RELEASE" | sed -e 's/^[PVTX]//' | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz`"
# Reset EXIT trap before exiting to avoid spurious non-zero exit code.
exitcode=$?
trap '' 0
exit $exitcode ;;
Amiga*:UNIX_System_V:4.0:*)
echo m68k-unknown-sysv4
exit ;;
*:[Aa]miga[Oo][Ss]:*:*)
echo "$UNAME_MACHINE"-unknown-amigaos
exit ;;
*:[Mm]orph[Oo][Ss]:*:*)
echo "$UNAME_MACHINE"-unknown-morphos
exit ;;
*:OS/390:*:*)
echo i370-ibm-openedition
exit ;;
*:z/VM:*:*)
echo s390-ibm-zvmoe
exit ;;
*:OS400:*:*)
echo powerpc-ibm-os400
exit ;;
arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*)
echo arm-acorn-riscix"$UNAME_RELEASE"
exit ;;
arm*:riscos:*:*|arm*:RISCOS:*:*)
echo arm-unknown-riscos
exit ;;
SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*)
echo hppa1.1-hitachi-hiuxmpp
exit ;;
Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*)
# akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE.
if test "`(/bin/universe) 2>/dev/null`" = att ; then
echo pyramid-pyramid-sysv3
else
echo pyramid-pyramid-bsd
fi
exit ;;
NILE*:*:*:dcosx)
echo pyramid-pyramid-svr4
exit ;;
DRS?6000:unix:4.0:6*)
echo sparc-icl-nx6
exit ;;
DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*)
case `/usr/bin/uname -p` in
sparc) echo sparc-icl-nx7; exit ;;
esac ;;
s390x:SunOS:*:*)
echo "$UNAME_MACHINE"-ibm-solaris2"`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`"
exit ;;
sun4H:SunOS:5.*:*)
echo sparc-hal-solaris2"`echo "$UNAME_RELEASE"|sed -e 's/[^.]*//'`"
exit ;;
sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*)
echo sparc-sun-solaris2"`echo "$UNAME_RELEASE" | sed -e 's/[^.]*//'`"
exit ;;
i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*)
echo i386-pc-auroraux"$UNAME_RELEASE"
exit ;;
i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*)
eval "$set_cc_for_build"
SUN_ARCH=i386
# If there is a compiler, see if it is configured for 64-bit objects.
# Note that the Sun cc does not turn __LP64__ into 1 like gcc does.
# This test works for both compilers.
if [ "$CC_FOR_BUILD" != no_compiler_found ]; then
if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \
(CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \
grep IS_64BIT_ARCH >/dev/null
then
SUN_ARCH=x86_64
fi
fi
echo "$SUN_ARCH"-pc-solaris2"`echo "$UNAME_RELEASE"|sed -e 's/[^.]*//'`"
exit ;;
sun4*:SunOS:6*:*)
# According to config.sub, this is the proper way to canonicalize
# SunOS6. Hard to guess exactly what SunOS6 will be like, but
# it's likely to be more like Solaris than SunOS4.
echo sparc-sun-solaris3"`echo "$UNAME_RELEASE"|sed -e 's/[^.]*//'`"
exit ;;
sun4*:SunOS:*:*)
case "`/usr/bin/arch -k`" in
Series*|S4*)
UNAME_RELEASE=`uname -v`
;;
esac
# Japanese Language versions have a version number like `4.1.3-JL'.
echo sparc-sun-sunos"`echo "$UNAME_RELEASE"|sed -e 's/-/_/'`"
exit ;;
sun3*:SunOS:*:*)
echo m68k-sun-sunos"$UNAME_RELEASE"
exit ;;
sun*:*:4.2BSD:*)
UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null`
test "x$UNAME_RELEASE" = x && UNAME_RELEASE=3
case "`/bin/arch`" in
sun3)
echo m68k-sun-sunos"$UNAME_RELEASE"
;;
sun4)
echo sparc-sun-sunos"$UNAME_RELEASE"
;;
esac
exit ;;
aushp:SunOS:*:*)
echo sparc-auspex-sunos"$UNAME_RELEASE"
exit ;;
# The situation for MiNT is a little confusing. The machine name
# can be virtually everything (everything which is not
# "atarist" or "atariste" at least should have a processor
# > m68000). The system name ranges from "MiNT" over "FreeMiNT"
# to the lowercase version "mint" (or "freemint"). Finally
# the system name "TOS" denotes a system which is actually not
# MiNT. But MiNT is downward compatible to TOS, so this should
# be no problem.
atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*)
echo m68k-atari-mint"$UNAME_RELEASE"
exit ;;
atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*)
echo m68k-atari-mint"$UNAME_RELEASE"
exit ;;
*falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*)
echo m68k-atari-mint"$UNAME_RELEASE"
exit ;;
milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*)
echo m68k-milan-mint"$UNAME_RELEASE"
exit ;;
hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*)
echo m68k-hades-mint"$UNAME_RELEASE"
exit ;;
*:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*)
echo m68k-unknown-mint"$UNAME_RELEASE"
exit ;;
m68k:machten:*:*)
echo m68k-apple-machten"$UNAME_RELEASE"
exit ;;
powerpc:machten:*:*)
echo powerpc-apple-machten"$UNAME_RELEASE"
exit ;;
RISC*:Mach:*:*)
echo mips-dec-mach_bsd4.3
exit ;;
RISC*:ULTRIX:*:*)
echo mips-dec-ultrix"$UNAME_RELEASE"
exit ;;
VAX*:ULTRIX*:*:*)
echo vax-dec-ultrix"$UNAME_RELEASE"
exit ;;
2020:CLIX:*:* | 2430:CLIX:*:*)
echo clipper-intergraph-clix"$UNAME_RELEASE"
exit ;;
mips:*:*:UMIPS | mips:*:*:RISCos)
eval "$set_cc_for_build"
sed 's/^ //' << EOF > "$dummy.c"
#ifdef __cplusplus
#include <stdio.h> /* for printf() prototype */
int main (int argc, char *argv[]) {
#else
int main (argc, argv) int argc; char *argv[]; {
#endif
#if defined (host_mips) && defined (MIPSEB)
#if defined (SYSTYPE_SYSV)
printf ("mips-mips-riscos%ssysv\\n", argv[1]); exit (0);
#endif
#if defined (SYSTYPE_SVR4)
printf ("mips-mips-riscos%ssvr4\\n", argv[1]); exit (0);
#endif
#if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD)
printf ("mips-mips-riscos%sbsd\\n", argv[1]); exit (0);
#endif
#endif
exit (-1);
}
EOF
$CC_FOR_BUILD -o "$dummy" "$dummy.c" &&
dummyarg=`echo "$UNAME_RELEASE" | sed -n 's/\([0-9]*\).*/\1/p'` &&
SYSTEM_NAME=`"$dummy" "$dummyarg"` &&
{ echo "$SYSTEM_NAME"; exit; }
echo mips-mips-riscos"$UNAME_RELEASE"
exit ;;
Motorola:PowerMAX_OS:*:*)
echo powerpc-motorola-powermax
exit ;;
Motorola:*:4.3:PL8-*)
echo powerpc-harris-powermax
exit ;;
Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*)
echo powerpc-harris-powermax
exit ;;
Night_Hawk:Power_UNIX:*:*)
echo powerpc-harris-powerunix
exit ;;
m88k:CX/UX:7*:*)
echo m88k-harris-cxux7
exit ;;
m88k:*:4*:R4*)
echo m88k-motorola-sysv4
exit ;;
m88k:*:3*:R3*)
echo m88k-motorola-sysv3
exit ;;
AViiON:dgux:*:*)
# DG/UX returns AViiON for all architectures
UNAME_PROCESSOR=`/usr/bin/uname -p`
if [ "$UNAME_PROCESSOR" = mc88100 ] || [ "$UNAME_PROCESSOR" = mc88110 ]
then
if [ "$TARGET_BINARY_INTERFACE"x = m88kdguxelfx ] || \
[ "$TARGET_BINARY_INTERFACE"x = x ]
then
echo m88k-dg-dgux"$UNAME_RELEASE"
else
echo m88k-dg-dguxbcs"$UNAME_RELEASE"
fi
else
echo i586-dg-dgux"$UNAME_RELEASE"
fi
exit ;;
M88*:DolphinOS:*:*) # DolphinOS (SVR3)
echo m88k-dolphin-sysv3
exit ;;
M88*:*:R3*:*)
# Delta 88k system running SVR3
echo m88k-motorola-sysv3
exit ;;
XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3)
echo m88k-tektronix-sysv3
exit ;;
Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD)
echo m68k-tektronix-bsd
exit ;;
*:IRIX*:*:*)
echo mips-sgi-irix"`echo "$UNAME_RELEASE"|sed -e 's/-/_/g'`"
exit ;;
????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX.
echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id
exit ;; # Note that: echo "'`uname -s`'" gives 'AIX '
i*86:AIX:*:*)
echo i386-ibm-aix
exit ;;
ia64:AIX:*:*)
if [ -x /usr/bin/oslevel ] ; then
IBM_REV=`/usr/bin/oslevel`
else
IBM_REV="$UNAME_VERSION.$UNAME_RELEASE"
fi
echo "$UNAME_MACHINE"-ibm-aix"$IBM_REV"
exit ;;
*:AIX:2:3)
if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then
eval "$set_cc_for_build"
sed 's/^ //' << EOF > "$dummy.c"
#include <sys/systemcfg.h>
main()
{
if (!__power_pc())
exit(1);
puts("powerpc-ibm-aix3.2.5");
exit(0);
}
EOF
if $CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"`
then
echo "$SYSTEM_NAME"
else
echo rs6000-ibm-aix3.2.5
fi
elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then
echo rs6000-ibm-aix3.2.4
else
echo rs6000-ibm-aix3.2
fi
exit ;;
*:AIX:*:[4567])
IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'`
if /usr/sbin/lsattr -El "$IBM_CPU_ID" | grep ' POWER' >/dev/null 2>&1; then
IBM_ARCH=rs6000
else
IBM_ARCH=powerpc
fi
if [ -x /usr/bin/lslpp ] ; then
IBM_REV=`/usr/bin/lslpp -Lqc bos.rte.libc |
awk -F: '{ print $3 }' | sed s/[0-9]*$/0/`
else
IBM_REV="$UNAME_VERSION.$UNAME_RELEASE"
fi
echo "$IBM_ARCH"-ibm-aix"$IBM_REV"
exit ;;
*:AIX:*:*)
echo rs6000-ibm-aix
exit ;;
ibmrt:4.4BSD:*|romp-ibm:4.4BSD:*)
echo romp-ibm-bsd4.4
exit ;;
ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and
echo romp-ibm-bsd"$UNAME_RELEASE" # 4.3 with uname added to
exit ;; # report: romp-ibm BSD 4.3
*:BOSX:*:*)
echo rs6000-bull-bosx
exit ;;
DPX/2?00:B.O.S.:*:*)
echo m68k-bull-sysv3
exit ;;
9000/[34]??:4.3bsd:1.*:*)
echo m68k-hp-bsd
exit ;;
hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*)
echo m68k-hp-bsd4.4
exit ;;
9000/[34678]??:HP-UX:*:*)
HPUX_REV=`echo "$UNAME_RELEASE"|sed -e 's/[^.]*.[0B]*//'`
case "$UNAME_MACHINE" in
9000/31?) HP_ARCH=m68000 ;;
9000/[34]??) HP_ARCH=m68k ;;
9000/[678][0-9][0-9])
if [ -x /usr/bin/getconf ]; then
sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null`
sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null`
case "$sc_cpu_version" in
523) HP_ARCH=hppa1.0 ;; # CPU_PA_RISC1_0
528) HP_ARCH=hppa1.1 ;; # CPU_PA_RISC1_1
532) # CPU_PA_RISC2_0
case "$sc_kernel_bits" in
32) HP_ARCH=hppa2.0n ;;
64) HP_ARCH=hppa2.0w ;;
'') HP_ARCH=hppa2.0 ;; # HP-UX 10.20
esac ;;
esac
fi
if [ "$HP_ARCH" = "" ]; then
eval "$set_cc_for_build"
sed 's/^ //' << EOF > "$dummy.c"
#define _HPUX_SOURCE
#include <stdlib.h>
#include <unistd.h>
int main ()
{
#if defined(_SC_KERNEL_BITS)
long bits = sysconf(_SC_KERNEL_BITS);
#endif
long cpu = sysconf (_SC_CPU_VERSION);
switch (cpu)
{
case CPU_PA_RISC1_0: puts ("hppa1.0"); break;
case CPU_PA_RISC1_1: puts ("hppa1.1"); break;
case CPU_PA_RISC2_0:
#if defined(_SC_KERNEL_BITS)
switch (bits)
{
case 64: puts ("hppa2.0w"); break;
case 32: puts ("hppa2.0n"); break;
default: puts ("hppa2.0"); break;
} break;
#else /* !defined(_SC_KERNEL_BITS) */
puts ("hppa2.0"); break;
#endif
default: puts ("hppa1.0"); break;
}
exit (0);
}
EOF
(CCOPTS="" $CC_FOR_BUILD -o "$dummy" "$dummy.c" 2>/dev/null) && HP_ARCH=`"$dummy"`
test -z "$HP_ARCH" && HP_ARCH=hppa
fi ;;
esac
if [ "$HP_ARCH" = hppa2.0w ]
then
eval "$set_cc_for_build"
# hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating
# 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler
# generating 64-bit code. GNU and HP use different nomenclature:
#
# $ CC_FOR_BUILD=cc ./config.guess
# => hppa2.0w-hp-hpux11.23
# $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess
# => hppa64-hp-hpux11.23
if echo __LP64__ | (CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) |
grep -q __LP64__
then
HP_ARCH=hppa2.0w
else
HP_ARCH=hppa64
fi
fi
echo "$HP_ARCH"-hp-hpux"$HPUX_REV"
exit ;;
ia64:HP-UX:*:*)
HPUX_REV=`echo "$UNAME_RELEASE"|sed -e 's/[^.]*.[0B]*//'`
echo ia64-hp-hpux"$HPUX_REV"
exit ;;
3050*:HI-UX:*:*)
eval "$set_cc_for_build"
sed 's/^ //' << EOF > "$dummy.c"
#include <unistd.h>
int
main ()
{
long cpu = sysconf (_SC_CPU_VERSION);
/* The order matters, because CPU_IS_HP_MC68K erroneously returns
true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct
results, however. */
if (CPU_IS_PA_RISC (cpu))
{
switch (cpu)
{
case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break;
case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break;
case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break;
default: puts ("hppa-hitachi-hiuxwe2"); break;
}
}
else if (CPU_IS_HP_MC68K (cpu))
puts ("m68k-hitachi-hiuxwe2");
else puts ("unknown-hitachi-hiuxwe2");
exit (0);
}
EOF
$CC_FOR_BUILD -o "$dummy" "$dummy.c" && SYSTEM_NAME=`"$dummy"` &&
{ echo "$SYSTEM_NAME"; exit; }
echo unknown-hitachi-hiuxwe2
exit ;;
9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:*)
echo hppa1.1-hp-bsd
exit ;;
9000/8??:4.3bsd:*:*)
echo hppa1.0-hp-bsd
exit ;;
*9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*)
echo hppa1.0-hp-mpeix
exit ;;
hp7??:OSF1:*:* | hp8?[79]:OSF1:*:*)
echo hppa1.1-hp-osf
exit ;;
hp8??:OSF1:*:*)
echo hppa1.0-hp-osf
exit ;;
i*86:OSF1:*:*)
if [ -x /usr/sbin/sysversion ] ; then
echo "$UNAME_MACHINE"-unknown-osf1mk
else
echo "$UNAME_MACHINE"-unknown-osf1
fi
exit ;;
parisc*:Lites*:*:*)
echo hppa1.1-hp-lites
exit ;;
C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*)
echo c1-convex-bsd
exit ;;
C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*)
if getsysinfo -f scalar_acc
then echo c32-convex-bsd
else echo c2-convex-bsd
fi
exit ;;
C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*)
echo c34-convex-bsd
exit ;;
C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*)
echo c38-convex-bsd
exit ;;
C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*)
echo c4-convex-bsd
exit ;;
CRAY*Y-MP:*:*:*)
echo ymp-cray-unicos"$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'
exit ;;
CRAY*[A-Z]90:*:*:*)
echo "$UNAME_MACHINE"-cray-unicos"$UNAME_RELEASE" \
| sed -e 's/CRAY.*\([A-Z]90\)/\1/' \
-e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \
-e 's/\.[^.]*$/.X/'
exit ;;
CRAY*TS:*:*:*)
echo t90-cray-unicos"$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'
exit ;;
CRAY*T3E:*:*:*)
echo alphaev5-cray-unicosmk"$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'
exit ;;
CRAY*SV1:*:*:*)
echo sv1-cray-unicos"$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'
exit ;;
*:UNICOS/mp:*:*)
echo craynv-cray-unicosmp"$UNAME_RELEASE" | sed -e 's/\.[^.]*$/.X/'
exit ;;
F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*)
FUJITSU_PROC=`uname -m | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz`
FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'`
FUJITSU_REL=`echo "$UNAME_RELEASE" | sed -e 's/ /_/'`
echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
exit ;;
5000:UNIX_System_V:4.*:*)
FUJITSU_SYS=`uname -p | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/\///'`
FUJITSU_REL=`echo "$UNAME_RELEASE" | tr ABCDEFGHIJKLMNOPQRSTUVWXYZ abcdefghijklmnopqrstuvwxyz | sed -e 's/ /_/'`
echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}"
exit ;;
i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*)
echo "$UNAME_MACHINE"-pc-bsdi"$UNAME_RELEASE"
exit ;;
sparc*:BSD/OS:*:*)
echo sparc-unknown-bsdi"$UNAME_RELEASE"
exit ;;
*:BSD/OS:*:*)
echo "$UNAME_MACHINE"-unknown-bsdi"$UNAME_RELEASE"
exit ;;
*:FreeBSD:*:*)
UNAME_PROCESSOR=`/usr/bin/uname -p`
case "$UNAME_PROCESSOR" in
amd64)
UNAME_PROCESSOR=x86_64 ;;
i386)
UNAME_PROCESSOR=i586 ;;
esac
echo "$UNAME_PROCESSOR"-unknown-freebsd"`echo "$UNAME_RELEASE"|sed -e 's/[-(].*//'`"
exit ;;
i*:CYGWIN*:*)
echo "$UNAME_MACHINE"-pc-cygwin
exit ;;
*:MINGW64*:*)
echo "$UNAME_MACHINE"-pc-mingw64
exit ;;
*:MINGW*:*)
echo "$UNAME_MACHINE"-pc-mingw32
exit ;;
*:MSYS*:*)
echo "$UNAME_MACHINE"-pc-msys
exit ;;
i*:PW*:*)
echo "$UNAME_MACHINE"-pc-pw32
exit ;;
*:Interix*:*)
case "$UNAME_MACHINE" in
x86)
echo i586-pc-interix"$UNAME_RELEASE"
exit ;;
authenticamd | genuineintel | EM64T)
echo x86_64-unknown-interix"$UNAME_RELEASE"
exit ;;
IA64)
echo ia64-unknown-interix"$UNAME_RELEASE"
exit ;;
esac ;;
i*:UWIN*:*)
echo "$UNAME_MACHINE"-pc-uwin
exit ;;
amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*)
echo x86_64-unknown-cygwin
exit ;;
prep*:SunOS:5.*:*)
echo powerpcle-unknown-solaris2"`echo "$UNAME_RELEASE"|sed -e 's/[^.]*//'`"
exit ;;
*:GNU:*:*)
# the GNU system
echo "`echo "$UNAME_MACHINE"|sed -e 's,[-/].*$,,'`-unknown-$LIBC`echo "$UNAME_RELEASE"|sed -e 's,/.*$,,'`"
exit ;;
*:GNU/*:*:*)
# other systems with GNU libc and userland
echo "$UNAME_MACHINE-unknown-`echo "$UNAME_SYSTEM" | sed 's,^[^/]*/,,' | tr "[:upper:]" "[:lower:]"``echo "$UNAME_RELEASE"|sed -e 's/[-(].*//'`-$LIBC"
exit ;;
i*86:Minix:*:*)
echo "$UNAME_MACHINE"-pc-minix
exit ;;
aarch64:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
aarch64_be:Linux:*:*)
UNAME_MACHINE=aarch64_be
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
alpha:Linux:*:*)
case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in
EV5) UNAME_MACHINE=alphaev5 ;;
EV56) UNAME_MACHINE=alphaev56 ;;
PCA56) UNAME_MACHINE=alphapca56 ;;
PCA57) UNAME_MACHINE=alphapca56 ;;
EV6) UNAME_MACHINE=alphaev6 ;;
EV67) UNAME_MACHINE=alphaev67 ;;
EV68*) UNAME_MACHINE=alphaev68 ;;
esac
objdump --private-headers /bin/sh | grep -q ld.so.1
if test "$?" = 0 ; then LIBC=gnulibc1 ; fi
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
arc:Linux:*:* | arceb:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
arm*:Linux:*:*)
eval "$set_cc_for_build"
if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \
| grep -q __ARM_EABI__
then
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
else
if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \
| grep -q __ARM_PCS_VFP
then
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"eabi
else
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"eabihf
fi
fi
exit ;;
avr32*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
cris:Linux:*:*)
echo "$UNAME_MACHINE"-axis-linux-"$LIBC"
exit ;;
crisv32:Linux:*:*)
echo "$UNAME_MACHINE"-axis-linux-"$LIBC"
exit ;;
e2k:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
frv:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
hexagon:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
i*86:Linux:*:*)
echo "$UNAME_MACHINE"-pc-linux-"$LIBC"
exit ;;
ia64:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
k1om:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
m32r*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
m68*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
mips:Linux:*:* | mips64:Linux:*:*)
eval "$set_cc_for_build"
sed 's/^ //' << EOF > "$dummy.c"
#undef CPU
#undef ${UNAME_MACHINE}
#undef ${UNAME_MACHINE}el
#if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL)
CPU=${UNAME_MACHINE}el
#else
#if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB)
CPU=${UNAME_MACHINE}
#else
CPU=
#endif
#endif
EOF
eval "`$CC_FOR_BUILD -E "$dummy.c" 2>/dev/null | grep '^CPU'`"
test "x$CPU" != x && { echo "$CPU-unknown-linux-$LIBC"; exit; }
;;
mips64el:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
openrisc*:Linux:*:*)
echo or1k-unknown-linux-"$LIBC"
exit ;;
or32:Linux:*:* | or1k*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
padre:Linux:*:*)
echo sparc-unknown-linux-"$LIBC"
exit ;;
parisc64:Linux:*:* | hppa64:Linux:*:*)
echo hppa64-unknown-linux-"$LIBC"
exit ;;
parisc:Linux:*:* | hppa:Linux:*:*)
# Look for CPU level
case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in
PA7*) echo hppa1.1-unknown-linux-"$LIBC" ;;
PA8*) echo hppa2.0-unknown-linux-"$LIBC" ;;
*) echo hppa-unknown-linux-"$LIBC" ;;
esac
exit ;;
ppc64:Linux:*:*)
echo powerpc64-unknown-linux-"$LIBC"
exit ;;
ppc:Linux:*:*)
echo powerpc-unknown-linux-"$LIBC"
exit ;;
ppc64le:Linux:*:*)
echo powerpc64le-unknown-linux-"$LIBC"
exit ;;
ppcle:Linux:*:*)
echo powerpcle-unknown-linux-"$LIBC"
exit ;;
riscv32:Linux:*:* | riscv64:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
s390:Linux:*:* | s390x:Linux:*:*)
echo "$UNAME_MACHINE"-ibm-linux-"$LIBC"
exit ;;
sh64*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
sh*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
sparc:Linux:*:* | sparc64:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
tile*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
vax:Linux:*:*)
echo "$UNAME_MACHINE"-dec-linux-"$LIBC"
exit ;;
x86_64:Linux:*:*)
if objdump -f /bin/sh | grep -q elf32-x86-64; then
echo "$UNAME_MACHINE"-pc-linux-"$LIBC"x32
else
echo "$UNAME_MACHINE"-pc-linux-"$LIBC"
fi
exit ;;
xtensa*:Linux:*:*)
echo "$UNAME_MACHINE"-unknown-linux-"$LIBC"
exit ;;
i*86:DYNIX/ptx:4*:*)
# ptx 4.0 does uname -s correctly, with DYNIX/ptx in there.
# earlier versions are messed up and put the nodename in both
# sysname and nodename.
echo i386-sequent-sysv4
exit ;;
i*86:UNIX_SV:4.2MP:2.*)
# Unixware is an offshoot of SVR4, but it has its own version
# number series starting with 2...
# I am not positive that other SVR4 systems won't match this,
# I just have to hope. -- rms.
# Use sysv4.2uw... so that sysv4* matches it.
echo "$UNAME_MACHINE"-pc-sysv4.2uw"$UNAME_VERSION"
exit ;;
i*86:OS/2:*:*)
# If we were able to find `uname', then EMX Unix compatibility
# is probably installed.
echo "$UNAME_MACHINE"-pc-os2-emx
exit ;;
i*86:XTS-300:*:STOP)
echo "$UNAME_MACHINE"-unknown-stop
exit ;;
i*86:atheos:*:*)
echo "$UNAME_MACHINE"-unknown-atheos
exit ;;
i*86:syllable:*:*)
echo "$UNAME_MACHINE"-pc-syllable
exit ;;
i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*)
echo i386-unknown-lynxos"$UNAME_RELEASE"
exit ;;
i*86:*DOS:*:*)
echo "$UNAME_MACHINE"-pc-msdosdjgpp
exit ;;
i*86:*:4.*:*)
UNAME_REL=`echo "$UNAME_RELEASE" | sed 's/\/MP$//'`
if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then
echo "$UNAME_MACHINE"-univel-sysv"$UNAME_REL"
else
echo "$UNAME_MACHINE"-pc-sysv"$UNAME_REL"
fi
exit ;;
i*86:*:5:[678]*)
# UnixWare 7.x, OpenUNIX and OpenServer 6.
case `/bin/uname -X | grep "^Machine"` in
*486*) UNAME_MACHINE=i486 ;;
*Pentium) UNAME_MACHINE=i586 ;;
*Pent*|*Celeron) UNAME_MACHINE=i686 ;;
esac
echo "$UNAME_MACHINE-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}{$UNAME_VERSION}"
exit ;;
i*86:*:3.2:*)
if test -f /usr/options/cb.name; then
UNAME_REL=`sed -n 's/.*Version //p' </usr/options/cb.name`
echo "$UNAME_MACHINE"-pc-isc"$UNAME_REL"
elif /bin/uname -X 2>/dev/null >/dev/null ; then
UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')`
(/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486
(/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \
&& UNAME_MACHINE=i586
(/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \
&& UNAME_MACHINE=i686
(/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \
&& UNAME_MACHINE=i686
echo "$UNAME_MACHINE"-pc-sco"$UNAME_REL"
else
echo "$UNAME_MACHINE"-pc-sysv32
fi
exit ;;
pc:*:*:*)
# Left here for compatibility:
# uname -m prints for DJGPP always 'pc', but it prints nothing about
# the processor, so we play safe by assuming i586.
# Note: whatever this is, it MUST be the same as what config.sub
# prints for the "djgpp" host, or else GDB configure will decide that
# this is a cross-build.
echo i586-pc-msdosdjgpp
exit ;;
Intel:Mach:3*:*)
echo i386-pc-mach3
exit ;;
paragon:*:*:*)
echo i860-intel-osf1
exit ;;
i860:*:4.*:*) # i860-SVR4
if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then
echo i860-stardent-sysv"$UNAME_RELEASE" # Stardent Vistra i860-SVR4
else # Add other i860-SVR4 vendors below as they are discovered.
echo i860-unknown-sysv"$UNAME_RELEASE" # Unknown i860-SVR4
fi
exit ;;
mini*:CTIX:SYS*5:*)
# "miniframe"
echo m68010-convergent-sysv
exit ;;
mc68k:UNIX:SYSTEM5:3.51m)
echo m68k-convergent-sysv
exit ;;
M680?0:D-NIX:5.3:*)
echo m68k-diab-dnix
exit ;;
M68*:*:R3V[5678]*:*)
test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;;
3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0)
OS_REL=''
test -r /etc/.relid \
&& OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
&& { echo i486-ncr-sysv4.3"$OS_REL"; exit; }
/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
&& { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;;
3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*)
/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
&& { echo i486-ncr-sysv4; exit; } ;;
NCR*:*:4.2:* | MPRAS*:*:4.2:*)
OS_REL='.3'
test -r /etc/.relid \
&& OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid`
/bin/uname -p 2>/dev/null | grep 86 >/dev/null \
&& { echo i486-ncr-sysv4.3"$OS_REL"; exit; }
/bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \
&& { echo i586-ncr-sysv4.3"$OS_REL"; exit; }
/bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \
&& { echo i586-ncr-sysv4.3"$OS_REL"; exit; } ;;
m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*)
echo m68k-unknown-lynxos"$UNAME_RELEASE"
exit ;;
mc68030:UNIX_System_V:4.*:*)
echo m68k-atari-sysv4
exit ;;
TSUNAMI:LynxOS:2.*:*)
echo sparc-unknown-lynxos"$UNAME_RELEASE"
exit ;;
rs6000:LynxOS:2.*:*)
echo rs6000-unknown-lynxos"$UNAME_RELEASE"
exit ;;
PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*)
echo powerpc-unknown-lynxos"$UNAME_RELEASE"
exit ;;
SM[BE]S:UNIX_SV:*:*)
echo mips-dde-sysv"$UNAME_RELEASE"
exit ;;
RM*:ReliantUNIX-*:*:*)
echo mips-sni-sysv4
exit ;;
RM*:SINIX-*:*:*)
echo mips-sni-sysv4
exit ;;
*:SINIX-*:*:*)
if uname -p 2>/dev/null >/dev/null ; then
UNAME_MACHINE=`(uname -p) 2>/dev/null`
echo "$UNAME_MACHINE"-sni-sysv4
else
echo ns32k-sni-sysv
fi
exit ;;
PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort
# says
echo i586-unisys-sysv4
exit ;;
*:UNIX_System_V:4*:FTX*)
# From Gerald Hewes .
# How about differentiating between stratus architectures? -djm
echo hppa1.1-stratus-sysv4
exit ;;
*:*:*:FTX*)
# From seanf@swdc.stratus.com.
echo i860-stratus-sysv4
exit ;;
i*86:VOS:*:*)
# From Paul.Green@stratus.com.
echo "$UNAME_MACHINE"-stratus-vos
exit ;;
*:VOS:*:*)
# From Paul.Green@stratus.com.
echo hppa1.1-stratus-vos
exit ;;
mc68*:A/UX:*:*)
echo m68k-apple-aux"$UNAME_RELEASE"
exit ;;
news*:NEWS-OS:6*:*)
echo mips-sony-newsos6
exit ;;
R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*)
if [ -d /usr/nec ]; then
echo mips-nec-sysv"$UNAME_RELEASE"
else
echo mips-unknown-sysv"$UNAME_RELEASE"
fi
exit ;;
BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only.
echo powerpc-be-beos
exit ;;
BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only.
echo powerpc-apple-beos
exit ;;
BePC:BeOS:*:*) # BeOS running on Intel PC compatible.
echo i586-pc-beos
exit ;;
BePC:Haiku:*:*) # Haiku running on Intel PC compatible.
echo i586-pc-haiku
exit ;;
x86_64:Haiku:*:*)
echo x86_64-unknown-haiku
exit ;;
SX-4:SUPER-UX:*:*)
echo sx4-nec-superux"$UNAME_RELEASE"
exit ;;
SX-5:SUPER-UX:*:*)
echo sx5-nec-superux"$UNAME_RELEASE"
exit ;;
SX-6:SUPER-UX:*:*)
echo sx6-nec-superux"$UNAME_RELEASE"
exit ;;
SX-7:SUPER-UX:*:*)
echo sx7-nec-superux"$UNAME_RELEASE"
exit ;;
SX-8:SUPER-UX:*:*)
echo sx8-nec-superux"$UNAME_RELEASE"
exit ;;
SX-8R:SUPER-UX:*:*)
echo sx8r-nec-superux"$UNAME_RELEASE"
exit ;;
SX-ACE:SUPER-UX:*:*)
echo sxace-nec-superux"$UNAME_RELEASE"
exit ;;
Power*:Rhapsody:*:*)
echo powerpc-apple-rhapsody"$UNAME_RELEASE"
exit ;;
*:Rhapsody:*:*)
echo "$UNAME_MACHINE"-apple-rhapsody"$UNAME_RELEASE"
exit ;;
*:Darwin:*:*)
UNAME_PROCESSOR=`uname -p` || UNAME_PROCESSOR=unknown
eval "$set_cc_for_build"
if test "$UNAME_PROCESSOR" = unknown ; then
UNAME_PROCESSOR=powerpc
fi
if test "`echo "$UNAME_RELEASE" | sed -e 's/\..*//'`" -le 10 ; then
if [ "$CC_FOR_BUILD" != no_compiler_found ]; then
if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \
(CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \
grep IS_64BIT_ARCH >/dev/null
then
case $UNAME_PROCESSOR in
i386) UNAME_PROCESSOR=x86_64 ;;
powerpc) UNAME_PROCESSOR=powerpc64 ;;
esac
fi
# On 10.4-10.6 one might compile for PowerPC via gcc -arch ppc
if (echo '#ifdef __POWERPC__'; echo IS_PPC; echo '#endif') | \
(CCOPTS="" $CC_FOR_BUILD -E - 2>/dev/null) | \
grep IS_PPC >/dev/null
then
UNAME_PROCESSOR=powerpc
fi
fi
elif test "$UNAME_PROCESSOR" = i386 ; then
# Avoid executing cc on OS X 10.9, as it ships with a stub
# that puts up a graphical alert prompting to install
# developer tools. Any system running Mac OS X 10.7 or
# later (Darwin 11 and later) is required to have a 64-bit
# processor. This is not true of the ARM version of Darwin
# that Apple uses in portable devices.
UNAME_PROCESSOR=x86_64
fi
echo "$UNAME_PROCESSOR"-apple-darwin"$UNAME_RELEASE"
exit ;;
*:procnto*:*:* | *:QNX:[0123456789]*:*)
UNAME_PROCESSOR=`uname -p`
if test "$UNAME_PROCESSOR" = x86; then
UNAME_PROCESSOR=i386
UNAME_MACHINE=pc
fi
echo "$UNAME_PROCESSOR"-"$UNAME_MACHINE"-nto-qnx"$UNAME_RELEASE"
exit ;;
*:QNX:*:4*)
echo i386-pc-qnx
exit ;;
NEO-*:NONSTOP_KERNEL:*:*)
echo neo-tandem-nsk"$UNAME_RELEASE"
exit ;;
NSE-*:NONSTOP_KERNEL:*:*)
echo nse-tandem-nsk"$UNAME_RELEASE"
exit ;;
NSR-*:NONSTOP_KERNEL:*:*)
echo nsr-tandem-nsk"$UNAME_RELEASE"
exit ;;
NSV-*:NONSTOP_KERNEL:*:*)
echo nsv-tandem-nsk"$UNAME_RELEASE"
exit ;;
NSX-*:NONSTOP_KERNEL:*:*)
echo nsx-tandem-nsk"$UNAME_RELEASE"
exit ;;
*:NonStop-UX:*:*)
echo mips-compaq-nonstopux
exit ;;
BS2000:POSIX*:*:*)
echo bs2000-siemens-sysv
exit ;;
DS/*:UNIX_System_V:*:*)
echo "$UNAME_MACHINE"-"$UNAME_SYSTEM"-"$UNAME_RELEASE"
exit ;;
*:Plan9:*:*)
# "uname -m" is not consistent, so use $cputype instead. 386
# is converted to i386 for consistency with other x86
# operating systems.
if test "$cputype" = 386; then
UNAME_MACHINE=i386
else
UNAME_MACHINE="$cputype"
fi
echo "$UNAME_MACHINE"-unknown-plan9
exit ;;
*:TOPS-10:*:*)
echo pdp10-unknown-tops10
exit ;;
*:TENEX:*:*)
echo pdp10-unknown-tenex
exit ;;
KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*)
echo pdp10-dec-tops20
exit ;;
XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*)
echo pdp10-xkl-tops20
exit ;;
*:TOPS-20:*:*)
echo pdp10-unknown-tops20
exit ;;
*:ITS:*:*)
echo pdp10-unknown-its
exit ;;
SEI:*:*:SEIUX)
echo mips-sei-seiux"$UNAME_RELEASE"
exit ;;
*:DragonFly:*:*)
echo "$UNAME_MACHINE"-unknown-dragonfly"`echo "$UNAME_RELEASE"|sed -e 's/[-(].*//'`"
exit ;;
*:*VMS:*:*)
UNAME_MACHINE=`(uname -p) 2>/dev/null`
case "$UNAME_MACHINE" in
A*) echo alpha-dec-vms ; exit ;;
I*) echo ia64-dec-vms ; exit ;;
V*) echo vax-dec-vms ; exit ;;
esac ;;
*:XENIX:*:SysV)
echo i386-pc-xenix
exit ;;
i*86:skyos:*:*)
echo "$UNAME_MACHINE"-pc-skyos"`echo "$UNAME_RELEASE" | sed -e 's/ .*$//'`"
exit ;;
i*86:rdos:*:*)
echo "$UNAME_MACHINE"-pc-rdos
exit ;;
i*86:AROS:*:*)
echo "$UNAME_MACHINE"-pc-aros
exit ;;
x86_64:VMkernel:*:*)
echo "$UNAME_MACHINE"-unknown-esx
exit ;;
amd64:Isilon\ OneFS:*:*)
echo x86_64-unknown-onefs
exit ;;
esac
echo "$0: unable to guess system type" >&2
case "$UNAME_MACHINE:$UNAME_SYSTEM" in
mips:Linux | mips64:Linux)
# If we got here on MIPS GNU/Linux, output extra information.
cat >&2 <<EOF

NOTE: MIPS GNU/Linux systems require a C compiler to fully recognize
the system type. Please install a C compiler and try again.
EOF
;;
esac

cat >&2 <<EOF

This script (version $timestamp) has failed to recognize the
operating system you are using. If your script is old, overwrite *all*
copies of config.guess and config.sub with the latest versions from:

  https://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess
and
  https://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub

If $0 has already been updated, send the following data and any
information you think might be pertinent to config-patches@gnu.org to
provide the necessary information to handle your system.

config.guess timestamp = $timestamp

uname -m = `(uname -m) 2>/dev/null || echo unknown`
uname -r = `(uname -r) 2>/dev/null || echo unknown`
uname -s = `(uname -s) 2>/dev/null || echo unknown`
uname -v = `(uname -v) 2>/dev/null || echo unknown`
/usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null`
/bin/uname -X = `(/bin/uname -X) 2>/dev/null`
hostinfo = `(hostinfo) 2>/dev/null`
/bin/universe = `(/bin/universe) 2>/dev/null`
/usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null`
/bin/arch = `(/bin/arch) 2>/dev/null`
/usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null`
/usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null`
UNAME_MACHINE = "$UNAME_MACHINE"
UNAME_RELEASE = "$UNAME_RELEASE"
UNAME_SYSTEM = "$UNAME_SYSTEM"
UNAME_VERSION = "$UNAME_VERSION"
EOF
exit 1
# Local variables:
# eval: (add-hook 'write-file-functions 'time-stamp)
# time-stamp-start: "timestamp='"
# time-stamp-format: "%:y-%02m-%02d"
# time-stamp-end: "'"
# End:
liblognorm-2.0.6/tools/ 0000755 0001750 0001750 00000000000 13370251163 012025 5 0000000 0000000 liblognorm-2.0.6/tools/Makefile.am 0000644 0001750 0001750 00000000511 13273030617 013777 0000000 0000000 # Note: slsa is not yet functional with the v2 engine!
#bin_PROGRAMS = slsa
#slsa_SOURCES = slsa.c syntaxes.c ../src/parser.c
#slsa_CPPFLAGS = -I$(top_srcdir)/src $(JSON_C_CFLAGS)
#slsa_LDADD = $(JSON_C_LIBS) $(LIBLOGNORM_LIBS) $(LIBESTR_LIBS)
#slsa_DEPENDENCIES = ../src/liblognorm.la
EXTRA_DIST=logrecord.h
#include_HEADERS=
liblognorm-2.0.6/tools/logrecord.h 0000644 0001750 0001750 00000003226 13273030617 014102 0000000 0000000 /* The (in-memory) format of a log record.
*
* A log record is a sequence of nodes of different syntaxes. A log
* record is described by a pointer to its root node.
* The most important node type is literal text, which is always
* assumed if no other syntax is detected. A full list of syntaxes
* can be found below.
*
* Copyright 2015 Rainer Gerhards
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#ifndef LOGRECORD_H_INCLUDED
#define LOGRECORD_H_INCLUDED
#include <stdint.h>
/* log record node syntaxes
* This "enumeration" starts at 0 and increments for each new
* syntax. Note that we do not use an enum type so that we can
* streamline the in-memory representation. For large sets of
* log records to be held in main memory, this is important.
*/
#define LRN_SYNTAX_LITERAL_TEXT 0
#define LRN_SYNTAX_IPV4 1
#define LRN_SYNTAX_INT_POSITIVE 2
#define LRN_SYNTAX_DATE_RFC3164 3
struct logrec_node {
struct logrec_node *next; /* NULL: end of record */
int8_t ntype;
union {
char *ltext; /* the literal text */
int64_t number; /* all integer types */
} val;
};
typedef struct logrec_node logrecord_t;
#endif /* ifndef LOGRECORD_H_INCLUDED */
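The header above only declares the node structure and its syntax codes; the distribution ships no example of how a record is built or traversed. The following minimal sketch is not part of the liblognorm sources: the helper lrn_append and the sample values are invented purely for illustration, assuming callers manage nodes with ordinary malloc/free.
/* Hypothetical usage sketch for logrecord.h -- NOT part of liblognorm.
 * Builds a two-node record ("accepted connection from " + 4711),
 * walks it, and frees it, following the linked-list layout declared above.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include "logrecord.h"

/* append a node of the given syntax type; helper name is made up for this example */
static logrecord_t *
lrn_append(logrecord_t **root, logrecord_t **tail, int8_t ntype)
{
	logrecord_t *node = calloc(1, sizeof(*node));
	if (node == NULL)
		return NULL;
	node->ntype = ntype;
	node->next = NULL;	/* NULL marks the end of the record */
	if (*root == NULL)
		*root = node;
	else
		(*tail)->next = node;
	*tail = node;
	return node;
}

int
main(void)
{
	logrecord_t *root = NULL, *tail = NULL, *node;

	/* literal text node */
	if ((node = lrn_append(&root, &tail, LRN_SYNTAX_LITERAL_TEXT)) == NULL)
		return 1;
	node->val.ltext = strdup("accepted connection from ");
	if (node->val.ltext == NULL)
		return 1;

	/* positive integer node */
	if ((node = lrn_append(&root, &tail, LRN_SYNTAX_INT_POSITIVE)) == NULL)
		return 1;
	node->val.number = 4711;

	/* walk the record from its root node to the end */
	for (node = root; node != NULL; node = node->next) {
		if (node->ntype == LRN_SYNTAX_LITERAL_TEXT)
			printf("literal: \"%s\"\n", node->val.ltext);
		else
			printf("number : %lld\n", (long long)node->val.number);
	}

	/* release all nodes */
	while (root != NULL) {
		node = root->next;
		if (root->ntype == LRN_SYNTAX_LITERAL_TEXT)
			free(root->val.ltext);
		free(root);
		root = node;
	}
	return 0;
}
Keeping ntype as a plain int8_t rather than an enum is what keeps each node this small, which is the memory rationale given in the header comment.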
liblognorm-2.0.6/tools/Makefile.in 0000644 0001750 0001750 00000031035 13370251155 014015 0000000 0000000 # Makefile.in generated by automake 1.15.1 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2017 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
# Note: slsa is not yet functional with the v2 engine!
#bin_PROGRAMS = slsa
#slsa_SOURCES = slsa.c syntaxes.c ../src/parser.c
#slsa_CPPFLAGS = -I$(top_srcdir)/src $(JSON_C_CFLAGS)
#slsa_LDADD = $(JSON_C_LIBS) $(LIBLOGNORM_LIBS) $(LIBESTR_LIBS)
#slsa_DEPENDENCIES = ../src/liblognorm.la
VPATH = @srcdir@
am__is_gnu_make = { \
if test -z '$(MAKELEVEL)'; then \
false; \
elif test -n '$(MAKE_HOST)'; then \
true; \
elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
true; \
else \
false; \
fi; \
}
am__make_running_with_option = \
case $${target_option-} in \
?) ;; \
*) echo "am__make_running_with_option: internal error: invalid" \
"target option '$${target_option-}' specified" >&2; \
exit 1;; \
esac; \
has_opt=no; \
sane_makeflags=$$MAKEFLAGS; \
if $(am__is_gnu_make); then \
sane_makeflags=$$MFLAGS; \
else \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
bs=\\; \
sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
| sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
esac; \
fi; \
skip_next=no; \
strip_trailopt () \
{ \
flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
}; \
for flg in $$sane_makeflags; do \
test $$skip_next = yes && { skip_next=no; continue; }; \
case $$flg in \
*=*|--*) continue;; \
-*I) strip_trailopt 'I'; skip_next=yes;; \
-*I?*) strip_trailopt 'I';; \
-*O) strip_trailopt 'O'; skip_next=yes;; \
-*O?*) strip_trailopt 'O';; \
-*l) strip_trailopt 'l'; skip_next=yes;; \
-*l?*) strip_trailopt 'l';; \
-[dEDm]) skip_next=yes;; \
-[JT]) skip_next=yes;; \
esac; \
case $$flg in \
*$$target_option*) has_opt=yes; break;; \
esac; \
done; \
test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
host_triplet = @host@
subdir = tools
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/m4/libtool.m4 \
$(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
$(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
CONFIG_CLEAN_VPATH_FILES =
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
SOURCES =
DIST_SOURCES =
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
am__DIST_COMMON = $(srcdir)/Makefile.in
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AR = @AR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
DLLTOOL = @DLLTOOL@
DSYMUTIL = @DSYMUTIL@
DUMPBIN = @DUMPBIN@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
FEATURE_REGEXP = @FEATURE_REGEXP@
FGREP = @FGREP@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
JSON_C_CFLAGS = @JSON_C_CFLAGS@
JSON_C_LIBS = @JSON_C_LIBS@
LD = @LD@
LDFLAGS = @LDFLAGS@
LIBESTR_CFLAGS = @LIBESTR_CFLAGS@
LIBESTR_LIBS = @LIBESTR_LIBS@
LIBLOGNORM_CFLAGS = @LIBLOGNORM_CFLAGS@
LIBLOGNORM_LIBS = @LIBLOGNORM_LIBS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LIBTOOL = @LIBTOOL@
LIPO = @LIPO@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@
MAKEINFO = @MAKEINFO@
MANIFEST_TOOL = @MANIFEST_TOOL@
MKDIR_P = @MKDIR_P@
NM = @NM@
NMEDIT = @NMEDIT@
OBJDUMP = @OBJDUMP@
OBJEXT = @OBJEXT@
OTOOL = @OTOOL@
OTOOL64 = @OTOOL64@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PCRE_CFLAGS = @PCRE_CFLAGS@
PCRE_LIBS = @PCRE_LIBS@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
RANLIB = @RANLIB@
SED = @SED@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
SPHINXBUILD = @SPHINXBUILD@
STRIP = @STRIP@
VALGRIND = @VALGRIND@
VERSION = @VERSION@
WARN_CFLAGS = @WARN_CFLAGS@
WARN_LDFLAGS = @WARN_LDFLAGS@
WARN_SCANNERFLAGS = @WARN_SCANNERFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_AR = @ac_ct_AR@
ac_ct_CC = @ac_ct_CC@
ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host = @host@
host_alias = @host_alias@
host_cpu = @host_cpu@
host_os = @host_os@
host_vendor = @host_vendor@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
pkg_config_libs_private = @pkg_config_libs_private@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
EXTRA_DIST = logrecord.h
all: all-am
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
&& { if test -f $@; then exit 0; else break; fi; }; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu tools/Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --gnu tools/Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(am__aclocal_m4_deps):
mostlyclean-libtool:
-rm -f *.lo
clean-libtool:
-rm -rf .libs _libs
tags TAGS:
ctags CTAGS:
cscope cscopelist:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile
installdirs:
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic clean-libtool mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am:
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am:
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man:
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-generic mostlyclean-libtool
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am:
.MAKE: install-am install-strip
.PHONY: all all-am check check-am clean clean-generic clean-libtool \
cscopelist-am ctags-am distclean distclean-generic \
distclean-libtool distdir dvi dvi-am html html-am info info-am \
install install-am install-data install-data-am install-dvi \
install-dvi-am install-exec install-exec-am install-html \
install-html-am install-info install-info-am install-man \
install-pdf install-pdf-am install-ps install-ps-am \
install-strip installcheck installcheck-am installdirs \
maintainer-clean maintainer-clean-generic mostlyclean \
mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
tags-am uninstall uninstall-am
.PRECIOUS: Makefile
#include_HEADERS=
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
liblognorm-2.0.6/lognorm.pc.in 0000644 0001750 0001750 00000000440 13273030617 013212 0000000 0000000 prefix=@prefix@
exec_prefix=@exec_prefix@
libdir=@libdir@
includedir=@includedir@
Name: lognorm
Description: fast samples-based log normalization library
Version: @VERSION@
Requires: libfastjson
Libs: -L${libdir} -llognorm
Libs.private: @pkg_config_libs_private@
Cflags: -I${includedir}
liblognorm-2.0.6/install-sh 0000755 0001750 0001750 00000035463 13370251154 012624 0000000 0000000 #!/bin/sh
# install - install a program, script, or datafile
scriptversion=2014-09-12.12; # UTC
# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# 'make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch.
tab=' '
nl='
'
IFS=" $tab$nl"
# Set DOITPROG to "echo" to test this script.
doit=${DOITPROG-}
doit_exec=${doit:-exec}
# Put in absolute file names if you don't have them in your path;
# or use environment vars.
chgrpprog=${CHGRPPROG-chgrp}
chmodprog=${CHMODPROG-chmod}
chownprog=${CHOWNPROG-chown}
cmpprog=${CMPPROG-cmp}
cpprog=${CPPROG-cp}
mkdirprog=${MKDIRPROG-mkdir}
mvprog=${MVPROG-mv}
rmprog=${RMPROG-rm}
stripprog=${STRIPPROG-strip}
posix_mkdir=
# Desired mode of installed file.
mode=0755
chgrpcmd=
chmodcmd=$chmodprog
chowncmd=
mvcmd=$mvprog
rmcmd="$rmprog -f"
stripcmd=
src=
dst=
dir_arg=
dst_arg=
copy_on_change=false
is_target_a_directory=possibly
usage="\
Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
or: $0 [OPTION]... SRCFILES... DIRECTORY
or: $0 [OPTION]... -t DIRECTORY SRCFILES...
or: $0 [OPTION]... -d DIRECTORIES...
In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.
Options:
--help display this help and exit.
--version display version info and exit.
-c (ignored)
-C install only if different (preserve the last data modification time)
-d create directories instead of installing files.
-g GROUP $chgrpprog installed files to GROUP.
-m MODE $chmodprog installed files to MODE.
-o USER $chownprog installed files to USER.
-s $stripprog installed files.
-t DIRECTORY install into DIRECTORY.
-T report an error if DSTFILE is a directory.
Environment variables override the default commands:
CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG
RMPROG STRIPPROG
"
while test $# -ne 0; do
case $1 in
-c) ;;
-C) copy_on_change=true;;
-d) dir_arg=true;;
-g) chgrpcmd="$chgrpprog $2"
shift;;
--help) echo "$usage"; exit $?;;
-m) mode=$2
case $mode in
*' '* | *"$tab"* | *"$nl"* | *'*'* | *'?'* | *'['*)
echo "$0: invalid mode: $mode" >&2
exit 1;;
esac
shift;;
-o) chowncmd="$chownprog $2"
shift;;
-s) stripcmd=$stripprog;;
-t)
is_target_a_directory=always
dst_arg=$2
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
shift;;
-T) is_target_a_directory=never;;
--version) echo "$0 $scriptversion"; exit $?;;
--) shift
break;;
-*) echo "$0: invalid option: $1" >&2
exit 1;;
*) break;;
esac
shift
done
# We allow the use of options -d and -T together, by making -d
# take the precedence; this is for compatibility with GNU install.
if test -n "$dir_arg"; then
if test -n "$dst_arg"; then
echo "$0: target directory not allowed when installing a directory." >&2
exit 1
fi
fi
if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then
# When -d is used, all remaining arguments are directories to create.
# When -t is used, the destination is already specified.
# Otherwise, the last argument is the destination. Remove it from $@.
for arg
do
if test -n "$dst_arg"; then
# $@ is not empty: it contains at least $arg.
set fnord "$@" "$dst_arg"
shift # fnord
fi
shift # arg
dst_arg=$arg
# Protect names problematic for 'test' and other utilities.
case $dst_arg in
-* | [=\(\)!]) dst_arg=./$dst_arg;;
esac
done
fi
if test $# -eq 0; then
if test -z "$dir_arg"; then
echo "$0: no input file specified." >&2
exit 1
fi
# It's OK to call 'install-sh -d' without argument.
# This can happen when creating conditional directories.
exit 0
fi
if test -z "$dir_arg"; then
if test $# -gt 1 || test "$is_target_a_directory" = always; then
if test ! -d "$dst_arg"; then
echo "$0: $dst_arg: Is not a directory." >&2
exit 1
fi
fi
fi
if test -z "$dir_arg"; then
do_exit='(exit $ret); exit $ret'
trap "ret=129; $do_exit" 1
trap "ret=130; $do_exit" 2
trap "ret=141; $do_exit" 13
trap "ret=143; $do_exit" 15
# Set umask so as not to create temps with too-generous modes.
# However, 'strip' requires both read and write access to temps.
case $mode in
# Optimize common cases.
*644) cp_umask=133;;
*755) cp_umask=22;;
*[0-7])
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw='% 200'
fi
cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;;
*)
if test -z "$stripcmd"; then
u_plus_rw=
else
u_plus_rw=,u+rw
fi
cp_umask=$mode$u_plus_rw;;
esac
fi
for src
do
# Protect names problematic for 'test' and other utilities.
case $src in
-* | [=\(\)!]) src=./$src;;
esac
if test -n "$dir_arg"; then
dst=$src
dstdir=$dst
test -d "$dstdir"
dstdir_status=$?
else
# Waiting for this to be detected by the "$cpprog $src $dsttmp" command
# might cause directories to be created, which would be especially bad
# if $src (and thus $dsttmp) contains '*'.
if test ! -f "$src" && test ! -d "$src"; then
echo "$0: $src does not exist." >&2
exit 1
fi
if test -z "$dst_arg"; then
echo "$0: no destination specified." >&2
exit 1
fi
dst=$dst_arg
# If destination is a directory, append the input filename; won't work
# if double slashes aren't ignored.
if test -d "$dst"; then
if test "$is_target_a_directory" = never; then
echo "$0: $dst_arg: Is a directory" >&2
exit 1
fi
dstdir=$dst
dst=$dstdir/`basename "$src"`
dstdir_status=0
else
dstdir=`dirname "$dst"`
test -d "$dstdir"
dstdir_status=$?
fi
fi
obsolete_mkdir_used=false
if test $dstdir_status != 0; then
case $posix_mkdir in
'')
# Create intermediate dirs using mode 755 as modified by the umask.
# This is like FreeBSD 'install' as of 1997-10-28.
umask=`umask`
case $stripcmd.$umask in
# Optimize common cases.
*[2367][2367]) mkdir_umask=$umask;;
.*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;;
*[0-7])
mkdir_umask=`expr $umask + 22 \
- $umask % 100 % 40 + $umask % 20 \
- $umask % 10 % 4 + $umask % 2
`;;
*) mkdir_umask=$umask,go-w;;
esac
# With -d, create the new directory with the user-specified mode.
# Otherwise, rely on $mkdir_umask.
if test -n "$dir_arg"; then
mkdir_mode=-m$mode
else
mkdir_mode=
fi
posix_mkdir=false
case $umask in
*[123567][0-7][0-7])
# POSIX mkdir -p sets u+wx bits regardless of umask, which
# is incompatible with FreeBSD 'install' when (umask & 300) != 0.
;;
*)
# $RANDOM is not portable (e.g. dash); use it when possible to
# lower collision chance
tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$
trap 'ret=$?; rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir" 2>/dev/null; exit $ret' 0
# As "mkdir -p" follows symlinks and we work in /tmp possibly; so
# create the $tmpdir first (and fail if unsuccessful) to make sure
# that nobody tries to guess the $tmpdir name.
if (umask $mkdir_umask &&
$mkdirprog $mkdir_mode "$tmpdir" &&
exec $mkdirprog $mkdir_mode -p -- "$tmpdir/a/b") >/dev/null 2>&1
then
if test -z "$dir_arg" || {
# Check for POSIX incompatibilities with -m.
# HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or
# other-writable bit of parent directory when it shouldn't.
# FreeBSD 6.1 mkdir -m -p sets mode of existing directory.
test_tmpdir="$tmpdir/a"
ls_ld_tmpdir=`ls -ld "$test_tmpdir"`
case $ls_ld_tmpdir in
d????-?r-*) different_mode=700;;
d????-?--*) different_mode=755;;
*) false;;
esac &&
$mkdirprog -m$different_mode -p -- "$test_tmpdir" && {
ls_ld_tmpdir_1=`ls -ld "$test_tmpdir"`
test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1"
}
}
then posix_mkdir=:
fi
rmdir "$tmpdir/a/b" "$tmpdir/a" "$tmpdir"
else
# Remove any dirs left behind by ancient mkdir implementations.
rmdir ./$mkdir_mode ./-p ./-- "$tmpdir" 2>/dev/null
fi
trap '' 0;;
esac;;
esac
if
$posix_mkdir && (
umask $mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir"
)
then :
else
# The umask is ridiculous, or mkdir does not conform to POSIX,
# or it failed possibly due to a race condition. Create the
# directory the slow way, step by step, checking for races as we go.
case $dstdir in
/*) prefix='/';;
[-=\(\)!]*) prefix='./';;
*) prefix='';;
esac
oIFS=$IFS
IFS=/
set -f
set fnord $dstdir
shift
set +f
IFS=$oIFS
prefixes=
for d
do
test X"$d" = X && continue
prefix=$prefix$d
if test -d "$prefix"; then
prefixes=
else
if $posix_mkdir; then
(umask=$mkdir_umask &&
$doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break
# Don't fail if two instances are running concurrently.
test -d "$prefix" || exit 1
else
case $prefix in
*\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;;
*) qprefix=$prefix;;
esac
prefixes="$prefixes '$qprefix'"
fi
fi
prefix=$prefix/
done
if test -n "$prefixes"; then
# Don't fail if two instances are running concurrently.
(umask $mkdir_umask &&
eval "\$doit_exec \$mkdirprog $prefixes") ||
test -d "$dstdir" || exit 1
obsolete_mkdir_used=true
fi
fi
fi
if test -n "$dir_arg"; then
{ test -z "$chowncmd" || $doit $chowncmd "$dst"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } &&
{ test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false ||
test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1
else
# Make a couple of temp file names in the proper directory.
dsttmp=$dstdir/_inst.$$_
rmtmp=$dstdir/_rm.$$_
# Trap to clean up those temp files at exit.
trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0
# Copy the file name to the temp name.
(umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") &&
# and set any options; do chmod last to preserve setuid bits.
#
# If any of these fail, we abort the whole thing. If we want to
# ignore errors from any of these, just make sure not to ignore
# errors from the above "$doit $cpprog $src $dsttmp" command.
#
{ test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } &&
{ test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } &&
{ test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } &&
{ test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } &&
# If -C, don't bother to copy if it wouldn't change the file.
if $copy_on_change &&
old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` &&
new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` &&
set -f &&
set X $old && old=:$2:$4:$5:$6 &&
set X $new && new=:$2:$4:$5:$6 &&
set +f &&
test "$old" = "$new" &&
$cmpprog "$dst" "$dsttmp" >/dev/null 2>&1
then
rm -f "$dsttmp"
else
# Rename the file to the real destination.
$doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null ||
# The rename failed, perhaps because mv can't rename something else
# to itself, or perhaps because mv is so ancient that it does not
# support -f.
{
# Now remove or move aside any old file at destination location.
# We try this two ways since rm can't unlink itself on some
# systems and the destination file might be busy for other
# reasons. In this case, the final cleanup might fail but the new
# file should still install successfully.
{
test ! -f "$dst" ||
$doit $rmcmd -f "$dst" 2>/dev/null ||
{ $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null &&
{ $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; }
} ||
{ echo "$0: cannot unlink or rename $dst" >&2
(exit 1); exit 1
}
} &&
# Now rename the file to the real destination.
$doit $mvcmd "$dsttmp" "$dst"
}
fi || exit 1
trap '' 0
fi
done
# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC"
# time-stamp-end: "; # UTC"
# End:
liblognorm-2.0.6/NEWS 0000644 0001750 0001750 00000000077 13273030617 011311 0000000 0000000 This file has been superseded by ChangeLog. Please see there.
liblognorm-2.0.6/COPYING 0000644 0001750 0001750 00000062620 13273030617 011647 0000000 0000000
GNU LESSER GENERAL PUBLIC LICENSE
Version 2.1, February 1999
Copyright (C) 1991, 1999 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
[This is the first released version of the Lesser GPL. It also counts
as the successor of the GNU Library Public License, version 2, hence
the version number 2.1.]
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
Licenses are intended to guarantee your freedom to share and change
free software--to make sure the software is free for all its users.
This license, the Lesser General Public License, applies to some
specially designated software packages--typically libraries--of the
Free Software Foundation and other authors who decide to use it. You
can use it too, but we suggest you first think carefully about whether
this license or the ordinary General Public License is the better
strategy to use in any particular case, based on the explanations below.
When we speak of free software, we are referring to freedom of use,
not price. Our General Public Licenses are designed to make sure that
you have the freedom to distribute copies of free software (and charge
for this service if you wish); that you receive source code or can get
it if you want it; that you can change the software and use pieces of
it in new free programs; and that you are informed that you can do
these things.
To protect your rights, we need to make restrictions that forbid
distributors to deny you these rights or to ask you to surrender these
rights. These restrictions translate to certain responsibilities for
you if you distribute copies of the library or if you modify it.
For example, if you distribute copies of the library, whether gratis
or for a fee, you must give the recipients all the rights that we gave
you. You must make sure that they, too, receive or can get the source
code. If you link other code with the library, you must provide
complete object files to the recipients, so that they can relink them
with the library after making changes to the library and recompiling
it. And you must show them these terms so they know their rights.
We protect your rights with a two-step method: (1) we copyright the
library, and (2) we offer you this license, which gives you legal
permission to copy, distribute and/or modify the library.
To protect each distributor, we want to make it very clear that
there is no warranty for the free library. Also, if the library is
modified by someone else and passed on, the recipients should know
that what they have is not the original version, so that the original
author's reputation will not be affected by problems that might be
introduced by others.
Finally, software patents pose a constant threat to the existence of
any free program. We wish to make sure that a company cannot
effectively restrict the users of a free program by obtaining a
restrictive license from a patent holder. Therefore, we insist that
any patent license obtained for a version of the library must be
consistent with the full freedom of use specified in this license.
Most GNU software, including some libraries, is covered by the
ordinary GNU General Public License. This license, the GNU Lesser
General Public License, applies to certain designated libraries, and
is quite different from the ordinary General Public License. We use
this license for certain libraries in order to permit linking those
libraries into non-free programs.
When a program is linked with a library, whether statically or using
a shared library, the combination of the two is legally speaking a
combined work, a derivative of the original library. The ordinary
General Public License therefore permits such linking only if the
entire combination fits its criteria of freedom. The Lesser General
Public License permits more lax criteria for linking other code with
the library.
We call this license the "Lesser" General Public License because it
does Less to protect the user's freedom than the ordinary General
Public License. It also provides other free software developers Less
of an advantage over competing non-free programs. These disadvantages
are the reason we use the ordinary General Public License for many
libraries. However, the Lesser license provides advantages in certain
special circumstances.
For example, on rare occasions, there may be a special need to
encourage the widest possible use of a certain library, so that it becomes
a de-facto standard. To achieve this, non-free programs must be
allowed to use the library. A more frequent case is that a free
library does the same job as widely used non-free libraries. In this
case, there is little to gain by limiting the free library to free
software only, so we use the Lesser General Public License.
In other cases, permission to use a particular library in non-free
programs enables a greater number of people to use a large body of
free software. For example, permission to use the GNU C Library in
non-free programs enables many more people to use the whole GNU
operating system, as well as its variant, the GNU/Linux operating
system.
Although the Lesser General Public License is Less protective of the
users' freedom, it does ensure that the user of a program that is
linked with the Library has the freedom and the wherewithal to run
that program using a modified version of the Library.
The precise terms and conditions for copying, distribution and
modification follow. Pay close attention to the difference between a
"work based on the library" and a "work that uses the library". The
former contains code derived from the library, whereas the latter must
be combined with the library in order to run.
GNU LESSER GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License Agreement applies to any software library or other
program which contains a notice placed by the copyright holder or
other authorized party saying it may be distributed under the terms of
this Lesser General Public License (also called "this License").
Each licensee is addressed as "you".
A "library" means a collection of software functions and/or data
prepared so as to be conveniently linked with application programs
(which use some of those functions and data) to form executables.
The "Library", below, refers to any such software library or work
which has been distributed under these terms. A "work based on the
Library" means either the Library or any derivative work under
copyright law: that is to say, a work containing the Library or a
portion of it, either verbatim or with modifications and/or translated
straightforwardly into another language. (Hereinafter, translation is
included without limitation in the term "modification".)
"Source code" for a work means the preferred form of the work for
making modifications to it. For a library, complete source code means
all the source code for all modules it contains, plus any associated
interface definition files, plus the scripts used to control compilation
and installation of the library.
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running a program using the Library is not restricted, and output from
such a program is covered only if its contents constitute a work based
on the Library (independent of the use of the Library in a tool for
writing it). Whether that is true depends on what the Library does
and what the program that uses the Library does.
1. You may copy and distribute verbatim copies of the Library's
complete source code as you receive it, in any medium, provided that
you conspicuously and appropriately publish on each copy an
appropriate copyright notice and disclaimer of warranty; keep intact
all the notices that refer to this License and to the absence of any
warranty; and distribute a copy of this License along with the
Library.
You may charge a fee for the physical act of transferring a copy,
and you may at your option offer warranty protection in exchange for a
fee.
2. You may modify your copy or copies of the Library or any portion
of it, thus forming a work based on the Library, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) The modified work must itself be a software library.
b) You must cause the files modified to carry prominent notices
stating that you changed the files and the date of any change.
c) You must cause the whole of the work to be licensed at no
charge to all third parties under the terms of this License.
d) If a facility in the modified Library refers to a function or a
table of data to be supplied by an application program that uses
the facility, other than as an argument passed when the facility
is invoked, then you must make a good faith effort to ensure that,
in the event an application does not supply such function or
table, the facility still operates, and performs whatever part of
its purpose remains meaningful.
(For example, a function in a library to compute square roots has
a purpose that is entirely well-defined independent of the
application. Therefore, Subsection 2d requires that any
application-supplied function or table used by this function must
be optional: if the application does not supply it, the square
root function must still compute square roots.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Library,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Library, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote
it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Library.
In addition, mere aggregation of another work not based on the Library
with the Library (or with a work based on the Library) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may opt to apply the terms of the ordinary GNU General Public
License instead of this License to a given copy of the Library. To do
this, you must alter all the notices that refer to this License, so
that they refer to the ordinary GNU General Public License, version 2,
instead of to this License. (If a newer version than version 2 of the
ordinary GNU General Public License has appeared, then you can specify
that version instead if you wish.) Do not make any other change in
these notices.
Once this change is made in a given copy, it is irreversible for
that copy, so the ordinary GNU General Public License applies to all
subsequent copies and derivative works made from that copy.
This option is useful when you wish to copy part of the code of
the Library into a program that is not a library.
4. You may copy and distribute the Library (or a portion or
derivative of it, under Section 2) in object code or executable form
under the terms of Sections 1 and 2 above provided that you accompany
it with the complete corresponding machine-readable source code, which
must be distributed under the terms of Sections 1 and 2 above on a
medium customarily used for software interchange.
If distribution of object code is made by offering access to copy
from a designated place, then offering equivalent access to copy the
source code from the same place satisfies the requirement to
distribute the source code, even though third parties are not
compelled to copy the source along with the object code.
5. A program that contains no derivative of any portion of the
Library, but is designed to work with the Library by being compiled or
linked with it, is called a "work that uses the Library". Such a
work, in isolation, is not a derivative work of the Library, and
therefore falls outside the scope of this License.
However, linking a "work that uses the Library" with the Library
creates an executable that is a derivative of the Library (because it
contains portions of the Library), rather than a "work that uses the
library". The executable is therefore covered by this License.
Section 6 states terms for distribution of such executables.
When a "work that uses the Library" uses material from a header file
that is part of the Library, the object code for the work may be a
derivative work of the Library even though the source code is not.
Whether this is true is especially significant if the work can be
linked without the Library, or if the work is itself a library. The
threshold for this to be true is not precisely defined by law.
If such an object file uses only numerical parameters, data
structure layouts and accessors, and small macros and small inline
functions (ten lines or less in length), then the use of the object
file is unrestricted, regardless of whether it is legally a derivative
work. (Executables containing this object code plus portions of the
Library will still fall under Section 6.)
Otherwise, if the work is a derivative of the Library, you may
distribute the object code for the work under the terms of Section 6.
Any executables containing that work also fall under Section 6,
whether or not they are linked directly with the Library itself.
6. As an exception to the Sections above, you may also combine or
link a "work that uses the Library" with the Library to produce a
work containing portions of the Library, and distribute that work
under terms of your choice, provided that the terms permit
modification of the work for the customer's own use and reverse
engineering for debugging such modifications.
You must give prominent notice with each copy of the work that the
Library is used in it and that the Library and its use are covered by
this License. You must supply a copy of this License. If the work
during execution displays copyright notices, you must include the
copyright notice for the Library among them, as well as a reference
directing the user to the copy of this License. Also, you must do one
of these things:
a) Accompany the work with the complete corresponding
machine-readable source code for the Library including whatever
changes were used in the work (which must be distributed under
Sections 1 and 2 above); and, if the work is an executable linked
with the Library, with the complete machine-readable "work that
uses the Library", as object code and/or source code, so that the
user can modify the Library and then relink to produce a modified
executable containing the modified Library. (It is understood
that the user who changes the contents of definitions files in the
Library will not necessarily be able to recompile the application
to use the modified definitions.)
b) Use a suitable shared library mechanism for linking with the
Library. A suitable mechanism is one that (1) uses at run time a
copy of the library already present on the user's computer system,
rather than copying library functions into the executable, and (2)
will operate properly with a modified version of the library, if
the user installs one, as long as the modified version is
interface-compatible with the version that the work was made with.
c) Accompany the work with a written offer, valid for at
least three years, to give the same user the materials
specified in Subsection 6a, above, for a charge no more
than the cost of performing this distribution.
d) If distribution of the work is made by offering access to copy
from a designated place, offer equivalent access to copy the above
specified materials from the same place.
e) Verify that the user has already received a copy of these
materials or that you have already sent this user a copy.
For an executable, the required form of the "work that uses the
Library" must include any data and utility programs needed for
reproducing the executable from it. However, as a special exception,
the materials to be distributed need not include anything that is
normally distributed (in either source or binary form) with the major
components (compiler, kernel, and so on) of the operating system on
which the executable runs, unless that component itself accompanies
the executable.
It may happen that this requirement contradicts the license
restrictions of other proprietary libraries that do not normally
accompany the operating system. Such a contradiction means you cannot
use both them and the Library together in an executable that you
distribute.
7. You may place library facilities that are a work based on the
Library side-by-side in a single library together with other library
facilities not covered by this License, and distribute such a combined
library, provided that the separate distribution of the work based on
the Library and of the other library facilities is otherwise
permitted, and provided that you do these two things:
a) Accompany the combined library with a copy of the same work
based on the Library, uncombined with any other library
facilities. This must be distributed under the terms of the
Sections above.
b) Give prominent notice with the combined library of the fact
that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
8. You may not copy, modify, sublicense, link with, or distribute
the Library except as expressly provided under this License. Any
attempt otherwise to copy, modify, sublicense, link with, or
distribute the Library is void, and will automatically terminate your
rights under this License. However, parties who have received copies,
or rights, from you under this License will not have their licenses
terminated so long as such parties remain in full compliance.
9. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Library or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Library (or any work based on the
Library), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Library or works based on it.
10. Each time you redistribute the Library (or any work based on the
Library), the recipient automatically receives a license from the
original licensor to copy, distribute, link with or modify the Library
subject to these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties with
this License.
11. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Library at all. For example, if a patent
license would not permit royalty-free redistribution of the Library by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Library.
If any portion of this section is held invalid or unenforceable under any
particular circumstance, the balance of the section is intended to apply,
and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
12. If the distribution and/or use of the Library is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Library under this License may add
an explicit geographical distribution limitation excluding those countries,
so that distribution is permitted only in or among countries not thus
excluded. In such case, this License incorporates the limitation as if
written in the body of this License.
13. The Free Software Foundation may publish revised and/or new
versions of the Lesser General Public License from time to time.
Such new versions will be similar in spirit to the present version,
but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Library
specifies a version number of this License which applies to it and
"any later version", you have the option of following the terms and
conditions either of that version or of any later version published by
the Free Software Foundation. If the Library does not specify a
license version number, you may choose any version ever published by
the Free Software Foundation.
14. If you wish to incorporate parts of the Library into other free
programs whose distribution conditions are incompatible with these,
write to the author to ask for permission. For software which is
copyrighted by the Free Software Foundation, write to the Free
Software Foundation; we sometimes make exceptions for this. Our
decision will be guided by the two goals of preserving the free status
of all derivatives of our free software and of promoting the sharing
and reuse of software generally.
NO WARRANTY
15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Libraries
If you develop a new library, and you want it to be of the greatest
possible use to the public, we recommend making it free software that
everyone can redistribute and change. You can do so by permitting
redistribution under these terms (or, alternatively, under the terms of the
ordinary General Public License).
To apply these terms, attach the following notices to the library. It is
safest to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least the
"copyright" line and a pointer to where the full notice is found.
liblognorm, a fast samples-based log normalization library
Copyright (C) 2010 Rainer Gerhards
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
liblognorm-2.0.6/rulebases/ 0000755 0001750 0001750 00000000000 13273030617 012653 5 0000000 0000000 liblognorm-2.0.6/rulebases/messages.rulebase 0000644 0001750 0001750 00000001144 13273030617 016126 0000000 0000000 prefix=%date:date-rfc3164% %host:word% %tag:char-to:\x3a%:
rule=: restart.
rule=: Bad line received from identity server at %ip:ipv4%: %port:number%
rule=: FTP session closed
rule=: wu-ftpd - TLS settings: control %wuftp-control:char-to:,%, client_cert %wuftp-clcert:char-to:,%, data %wuftp-allow:word%
rule=: User %user:word% timed out after %timeout:number% seconds at %otherdatesyntax:word% %otherdate:date-rfc3164% %otheryear:word%
rule=: getpeername (in.ftpd): Transport endpoint is not connected
# the one below is problematic (and needs some backtracking)
#: %disk:char-to:\x3a%: timeout waiting for DMA
liblognorm-2.0.6/rulebases/syntax.txt 0000644 0001750 0001750 00000010361 13273030617 014663 0000000 0000000 WARNING
=======
This file is somewhat obsolete; for current information, look at the doc/
directory.
Basic syntax
============
Each line in a rulebase file is evaluated separately.
Lines starting with '#' are comments.
Empty lines are skipped; they can be inserted for readability.
If a line starts with 'rule=', it contains a rule. Such a line has the
following format:
rule=[<tag1>[,<tag2>...]]:<match description>
Everything before the colon is treated as a comma-separated list of tags, which
will be attached to a match. After the colon, the match description is
given. It consists of string literals and field selectors. String literals
must match exactly. A field selector has this format:
%<field name>:<field type>[:<extra data>]%
Percent signs are used to enclose the field selector. If you need to match a
literal '%', it can be written as '%%' or '\x25'.
The behaviour of a field selector depends on its type, which is described below.
If the field name is set to '-', the field is matched but not saved.
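As an illustration (this rule is hypothetical and not part of the shipped
rulebases), a rule with one tag, literal text and two field selectors could
look like this:
rule=login:User %user:word% logged in from %ip:ipv4%
It would match a line such as 'User alice logged in from 192.0.2.1' and
produce the fields user="alice" and ip="192.0.2.1", tagged 'login'.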
Several rules can have a common prefix. You can set it once with this syntax:
prefix=<prefix match description>
Every following rule will be treated as an addition to this prefix.
Prefix can be reset to default (empty value) by the line:
prefix=
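For example, the following hypothetical rules share a syslog-style prefix,
which is prepended to each of them until it is reset:
prefix=%date:date-rfc3164% %host:word% sshd:
rule=accepted: Accepted password for %user:word%
rule=failed: Failed password for %user:word%
prefix=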
Tags of the matched rule are attached to the message and can be used to
annotate it. An annotation adds fixed fields to the message. The syntax is
as follows:
annotate=<tag>:+<field name>="<field value>"
The field value must always be enclosed in double quote marks.
There can be multiple annotations for the same tag.
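For example, the hypothetical 'login' tag used earlier could be annotated as:
annotate=login:+origin="authlog"
Every message matching a rule that carries the 'login' tag then receives the
additional fixed field origin="authlog".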
Field types
===========
Field type: 'number'
Matches: One or more decimal digits.
Extra data: Not used
Example: %field_name:number%
Field type: 'word'
Matches: One or more characters, up to the next space (\x20), or
up to end of line.
Extra data: Not used
Example: %field_name:word%
Field type: 'alpha'
Matches: One or more alphabetic characters, up to the next
whitespace, punctuation, decimal digit or ctrl.
Extra data: Not used
Example: %field_name:alpha%
Field type: 'char-to'
Matches: One or more characters, up to the next character given in
extra data.
Extra data: One character (can be escaped)
Example: %field_name:char-to:,%
%field_name:char-to:\x25%
Field type: 'char-sep'
Matches: Zero or more characters, up to the next character given in
extra data, or up to end of line.
Extra data: One character (can be escaped)
Example: %field_name:char-sep:,%
%field_name:char-sep:\x25%
Field type: 'rest'
Matches: Zero or more characters till end of line.
Extra data: Not used
Example: %field_name:rest%
Notes: Should always be at the end of the rule.
Field type: 'quoted-string'
Matches: Zero or more characters, surrounded by double quote marks.
Extra data: Not used
Example: %field_name:quoted-string%
Notes: Quote marks are stripped from the match.
Field type: 'date-iso'
Matches: Date of format 'YYYY-MM-DD'.
Extra data: Not used
Example: %field-name:date-iso%
Field type: 'time-24hr'
Matches: Time of format 'HH:MM:SS', where HH is 00..23.
Extra data: Not used
Example: %field_name:time-24hr%
Field type: 'time-12hr'
Matches: Time of format 'HH:MM:SS', where HH is 00..12.
Extra data: Not used
Example: %field_name:time-12hr%
Field type: 'ipv4'
Matches: IPv4 address, in dot-decimal notation (AAA.BBB.CCC.DDD).
Extra data: Not used
Example: %field_name:ipv4%
Field type: 'date-rfc3164'
Matches: Valid date/time in RFC3164 format, i.e.: 'Oct 29 09:47:08'
Extra data: Not used
Example: %field_name:date-rfc3164%
Notes: This parser implements several quirks to match malformed
timestamps from some devices.
Field type: 'date-rfc5424'
Matches: Valid date/time in RFC5424 format, i.e.:
'1985-04-12T19:20:50.52-04:00'
Extra data: Not used
Example: %field_name:date-rfc5424%
Notes: Slightly different formats are allowed.
Field type: 'iptables'
Matches: Name=value pairs, separated by spaces, as in Netfilter log
messages.
Extra data: Not used
Example: %-:iptables%
Notes: The name of the selector is not used; names from the log line are
used instead. This selector always matches everything up to the
end of the line. It cannot match zero characters.
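For instance, given the made-up fragment 'SRC=192.0.2.1 DST=192.0.2.2 LEN=48',
%-:iptables% would yield the fields SRC, DST and LEN with the corresponding
values.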
Examples
========
Look at sample.rulebase for example rules and matching lines.
liblognorm-2.0.6/rulebases/sample.rulebase 0000644 0001750 0001750 00000003641 13273030617 015604 0000000 0000000 # Some sample rules and strings matching them
# Prefix sample:
# myhostname: code=23
prefix=%host:char-to:\x3a%:
rule=prefixed_code:code=%code:number%
# myhostname: name=somename
rule=prefixed_name:name=%name:word%
# Reset prefix to default (empty value):
prefix=
# Quantity: 555
rule=tag1:Quantity: %N:number%
# Weight: 42kg
rule=tag2:Weight: %N:number%%unit:word%
annotate=tag2:+fat="free"
# %%
rule=tag3,percent:\x25%%
annotate=percent:+percent="100"
annotate=tag3:+whole="whale"
annotate=tag3:+part="wha"
# literal
rule=tag4,tag5,tag6,tag4:literal
annotate=tag4:+this="that"
# first field,second field,third field,fourth field
rule=csv:%r1:char-to:,%,%r2:char-to:,%,%r3:char-to:,%,%r4:rest%
# CSV: field1,,field3
rule=better-csv:CSV: %f1:char-sep:,%,%f2:char-sep:,%,%f3:char-sep:,%
# Snow White and the Seven Dwarfs
rule=tale:Snow White and %company:rest%
# iptables: SRC=192.168.1.134 DST=46.252.161.13 LEN=48 TOS=0x00 PREC=0x00
rule=ipt:iptables: %dummy:iptables%
# 2012-10-11 src=127.0.0.1 dst=88.111.222.19
rule=:%date:date-iso% src=%src:ipv4% dst=%dst:ipv4%
# Oct 29 09:47:08 server rsyslogd: rsyslogd's groupid changed to 103
rule=syslog:%date1:date-rfc3164% %host:word% %tag:char-to:\x3a%: %text:rest%
# Oct 29 09:47:08
rule=rfc3164:%date1:date-rfc3164%
# 1985-04-12T19:20:50.52-04:00
rule=rfc5424:%date1:date-rfc5424%
# 1985-04-12T19:20:50.52-04:00 testing 123
rule=rfc5424:%date1:date-rfc5424% %test:word% %test2:number%
# quoted_string="Contents of a quoted string cannot include quote marks"
rule=quote:quoted_string=%quote:quoted-string%
# tokenized words: aaa.org; bbb.com; ccc.net
rule=tokenized_words:tokenized words: %arr:tokenized:; :char-sep:\x3b%
# tokenized regex: aaa.org; bbb.com; ccc.net
rule=tokenized_regex:tokenized regex: %arr:tokenized:; :regex:[^; ]+%
# regex: abcdef
rule=regex:regex: %token:regex:abc.ef%
# host451
# generates { basename:"host", hostid:451 }
rule=:%basename:alpha%%hostid:number%
liblognorm-2.0.6/rulebases/cisco.rulebase 0000644 0001750 0001750 00000001246 13273030617 015422 0000000 0000000 prefix=%date:date-rfc3164% %host:word% %seqnum:number%: %othseq:char-to:\x3a%: %%%tag:char-to:\x3a%:
rule=: Configured from console by %tty:word:% (%ip:ipv4%)
rule=: Authentication failure for %proto:word% req from host %ip:ipv4%
rule=: Interface %interface:char-to:,%, changed state to %state:word%
rule=: Line protocol on Interface %interface:char-to:,%, changed state to %state:word%
rule=: Attempted to connect to %servname:word% from %ip:ipv4%
# too-generic syntaxes (like %port:word% below) cause problems.
# It is best to have very specific syntaxes, but as an
# interim solution we may need to backtrack if there is no other way to handle it.
#: %port:word% transmit error
liblognorm-2.0.6/config.h.in 0000644 0001750 0001750 00000007306 13370251154 012636 0000000 0000000 /* config.h.in. Generated from configure.ac by autoheader. */
/* Defined if advanced statistics are enabled. */
#undef ADVANCED_STATS
/* Defined if debug mode is enabled. */
#undef DEBUG
/* Regular expressions support enabled. */
#undef FEATURE_REGEXP
/* Define to 1 if you have the declaration of `strerror_r', and to 0 if you
don't. */
#undef HAVE_DECL_STRERROR_R
/* Define to 1 if you have the <dlfcn.h> header file. */
#undef HAVE_DLFCN_H
/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
/* Define to 1 if you have the `strdup' function. */
#undef HAVE_STRDUP
/* Define to 1 if you have the `strerror_r' function. */
#undef HAVE_STRERROR_R
/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H
/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H
/* Define to 1 if you have the `strndup' function. */
#undef HAVE_STRNDUP
/* Define to 1 if you have the `strtok_r' function. */
#undef HAVE_STRTOK_R
/* Define to 1 if you have the <sys/select.h> header file. */
#undef HAVE_SYS_SELECT_H
/* Define to 1 if you have the <sys/socket.h> header file. */
#undef HAVE_SYS_SOCKET_H
/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H
/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H
/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H
/* Define to the sub-directory where libtool stores uninstalled libraries. */
#undef LT_OBJDIR
/* Defined if debug mode is disabled. */
#undef NDEBUG
/* Name of package */
#undef PACKAGE
/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT
/* Define to the full name of this package. */
#undef PACKAGE_NAME
/* Define to the full name and version of this package. */
#undef PACKAGE_STRING
/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME
/* Define to the home page for this package. */
#undef PACKAGE_URL
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Define as the return type of signal handlers (`int' or `void'). */
#undef RETSIGTYPE
/* Define to the type of arg 1 for `select'. */
#undef SELECT_TYPE_ARG1
/* Define to the type of args 2, 3 and 4 for `select'. */
#undef SELECT_TYPE_ARG234
/* Define to the type of arg 5 for `select'. */
#undef SELECT_TYPE_ARG5
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Define to 1 if strerror_r returns char *. */
#undef STRERROR_R_CHAR_P
/* Enable extensions on AIX 3, Interix. */
#ifndef _ALL_SOURCE
# undef _ALL_SOURCE
#endif
/* Enable GNU extensions on systems that have them. */
#ifndef _GNU_SOURCE
# undef _GNU_SOURCE
#endif
/* Enable threading extensions on Solaris. */
#ifndef _POSIX_PTHREAD_SEMANTICS
# undef _POSIX_PTHREAD_SEMANTICS
#endif
/* Enable extensions on HP NonStop. */
#ifndef _TANDEM_SOURCE
# undef _TANDEM_SOURCE
#endif
/* Enable general extensions on Solaris. */
#ifndef __EXTENSIONS__
# undef __EXTENSIONS__
#endif
/* Version number of package */
#undef VERSION
/* Define to 1 if on MINIX. */
#undef _MINIX
/* Define to 2 if the system does not provide POSIX.1 features except with
this defined. */
#undef _POSIX_1_SOURCE
/* Define to 1 if you need to in order for `stat' and other things to work. */
#undef _POSIX_SOURCE
/* Define to empty if `const' does not conform to ANSI C. */
#undef const
/* Define to `unsigned int' if <sys/types.h> does not define. */
#undef size_t
liblognorm-2.0.6/src/ 0000755 0001750 0001750 00000000000 13370251163 011454 5 0000000 0000000 liblognorm-2.0.6/src/enc_syslog.c 0000644 0001750 0001750 00000012371 13273030617 013712 0000000 0000000 /**
* @file enc_syslog.c
* Encoder for syslog format.
* This file contains code from all related objects that is required in
* order to encode syslog format. The core idea of putting all of this into
* a single file is that this makes it very straightforward to write
* encoders for different encodings, as all is in one place.
*/
/*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2016 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <libestr.h>
#include <json.h>
#include "internal.h"
#include "liblognorm.h"
#include "enc.h"
static int
ln_addValue_Syslog(const char *value, es_str_t **str)
{
int r;
es_size_t i;
assert(str != NULL);
assert(*str != NULL);
assert(value != NULL);
for(i = 0; i < strlen(value); i++) {
switch(value[i]) {
case '\0':
es_addChar(str, '\\');
es_addChar(str, '0');
break;
case '\n':
es_addChar(str, '\\');
es_addChar(str, 'n');
break;
/* TODO : add rest of control characters here... */
case ',': /* comma is CEE-reserved for lists */
es_addChar(str, '\\');
es_addChar(str, ',');
break;
#if 0 /* alternative encoding for discussion */
case '^': /* CEE-reserved for lists */
es_addChar(str, '\\');
es_addChar(str, '^');
break;
#endif
/* at this layer ... do we need to think about transport
* encoding at all? Or simply leave it to the transport agent?
*/
case '\\': /* RFC5424 reserved */
es_addChar(str, '\\');
es_addChar(str, '\\');
break;
case ']': /* RFC5424 reserved */
es_addChar(str, '\\');
es_addChar(str, ']');
break;
case '\"': /* RFC5424 reserved */
es_addChar(str, '\\');
es_addChar(str, '\"');
break;
default:
es_addChar(str, value[i]);
break;
}
}
r = 0;
return r;
}
static int
ln_addField_Syslog(char *name, struct json_object *field, es_str_t **str)
{
int r;
const char *value;
int needComma = 0;
struct json_object *obj;
int i;
assert(field != NULL);
assert(str != NULL);
assert(*str != NULL);
CHKR(es_addBuf(str, name, strlen(name)));
CHKR(es_addBuf(str, "=\"", 2));
switch(json_object_get_type(field)) {
case json_type_array:
for (i = json_object_array_length(field) - 1; i >= 0; i--) {
if(needComma)
es_addChar(str, ',');
else
needComma = 1;
CHKN(obj = json_object_array_get_idx(field, i));
CHKN(value = json_object_get_string(obj));
CHKR(ln_addValue_Syslog(value, str));
}
break;
case json_type_string:
case json_type_int:
CHKN(value = json_object_get_string(field));
CHKR(ln_addValue_Syslog(value, str));
break;
case json_type_null:
case json_type_boolean:
case json_type_double:
case json_type_object:
CHKR(es_addBuf(str, "***unsupported type***", sizeof("***unsupported type***")-1));
break;
default:
CHKR(es_addBuf(str, "***OBJECT***", sizeof("***OBJECT***")-1));
}
CHKR(es_addChar(str, '\"'));
r = 0;
done:
return r;
}
static inline int
ln_addTags_Syslog(struct json_object *taglist, es_str_t **str)
{
int r = 0;
struct json_object *tagObj;
int needComma = 0;
const char *tagCstr;
int i;
assert(json_object_is_type(taglist, json_type_array));
CHKR(es_addBuf(str, " event.tags=\"", 13));
for (i = json_object_array_length(taglist) - 1; i >= 0; i--) {
if(needComma)
es_addChar(str, ',');
else
needComma = 1;
CHKN(tagObj = json_object_array_get_idx(taglist, i));
CHKN(tagCstr = json_object_get_string(tagObj));
CHKR(es_addBuf(str, (char*)tagCstr, strlen(tagCstr)));
}
es_addChar(str, '"');
done: return r;
}
int
ln_fmtEventToRFC5424(struct json_object *json, es_str_t **str)
{
int r = -1;
struct json_object *tags;
assert(json != NULL);
assert(json_object_is_type(json, json_type_object));
if((*str = es_newStr(256)) == NULL)
goto done;
es_addBuf(str, "[cee@115", 8);
if(json_object_object_get_ex(json, "event.tags", &tags)) {
CHKR(ln_addTags_Syslog(tags, str));
}
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
char *const name = (char*)json_object_iter_peek_name(&it);
if (strcmp(name, "event.tags")) {
es_addChar(str, ' ');
ln_addField_Syslog(name, json_object_iter_peek_value(&it), str);
}
json_object_iter_next(&it);
}
es_addChar(str, ']');
r = 0; /* success: structured data element fully assembled */
done:
return r;
}
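/* Illustrative usage sketch (not part of the original source): it builds a
 * tiny event with json-c and prints the RFC5424 structured-data encoding
 * produced by ln_fmtEventToRFC5424() above, e.g. [cee@115 user="alice"].
 * es_str2cstr() is the standard libestr conversion helper; the function and
 * field names here are examples only.
 */
#if 0
static void
example_fmtEvent(void)
{
	struct json_object *ev = json_object_new_object();
	es_str_t *out = NULL;
	char *cstr;
	json_object_object_add(ev, "user", json_object_new_string("alice"));
	ln_fmtEventToRFC5424(ev, &out);
	cstr = es_str2cstr(out, NULL);
	printf("%s\n", cstr);
	free(cstr);
	es_deleteStr(out);
	json_object_put(ev);
}
#endif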
liblognorm-2.0.6/src/v1_liblognorm.h 0000644 0001750 0001750 00000013076 13273030617 014327 0000000 0000000 /**
* @file v1_liblognorm.h
* @brief The public liblognorm API.
*
* Functions other than those defined here MUST not be called by
* a liblognorm "user" application.
*
* This file is meant to be included by applications using liblognorm.
* For lognorm library files themselves, include "lognorm.h".
*//**
* @mainpage
* Liblognorm is an easy to use and fast samples-based log normalization
* library.
*
* It can be passed a stream of arbitrary log messages, one at a time, and for
* each message it will output well-defined name-value pairs and a set of
* tags describing the message.
*
* For further details, see its initial announcement available at
* http://blog.gerhards.net/2010/10/introducing-liblognorm.html
*
* The public interface of this library is described in liblognorm.h.
*
* Liblognorm fully supports Unicode. Like most Linux tools, it operates
* on UTF-8 natively ("passive mode"). This was decided because it keeps
* the size of data structures small while still supporting all of the
* world's languages (actually more than when we used UCS-2).
*
* At the technical level, we can handle UTF-8 multibyte sequences transparently.
* Liblognorm only needs to look at a few US-ASCII characters to do the
* sample-base parsing (the characters that indicate fields), so this is no
* issue. Inside the parse tree, a multibyte sequence can simply be processed
* as if it were a sequence of different characters that each make up their
* own symbol. In fact, this even allows for somewhat greater parsing
* speed.
*//*
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2013 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef V1_LIBLOGNORM_H_INCLUDED
#define V1_LIBLOGNORM_H_INCLUDED
#include "liblognorm.h"
/**
* Inherit control attributes from a library context.
*
* This does not copy the parse-tree, but does copy
* behaviour-controlling attributes such as enableRegex.
*
* Just as with ln_initCtx, ln_exitCtx() must be called on a library
* context that is no longer needed.
*
* @return new library context or NULL if an error occurred
*/
ln_ctx ln_v1_inherittedCtx(ln_ctx parent);
/**
* Reads a sample stored in buffer buf and creates a new ln_samp object
* out of it.
*
* @note
* It is the caller's responsibility to delete the newly
* created ln_samp object if it is no longer needed.
*
* @param[in] ctx current library context
* @param[in] buf NULL-terminated C string containing the contents of the sample
* @return Returns zero on success, something else otherwise.
*/
int
ln_v1_loadSample(ln_ctx ctx, const char *buf);
/**
* Load a (log) sample file.
*
* The file must contain log samples in syntactically correct format. Samples are added
* to set already loaded in the current context. If there is a sample with duplicate
* semantics, this sample will be ignored. Most importantly, this can \b not be used
* to change tag assignments for a given sample.
*
* @param[in] ctx The library context to load the samples into.
* @param[in] file Name of file to be loaded.
*
* @return Returns zero on success, something else otherwise.
*/
int ln_v1_loadSamples(ln_ctx ctx, const char *file);
/**
* Normalize a message.
*
* This is the main library entry point. It is called with a message
* to normalize and will return a normalized in-memory representation
* of it.
*
* If an error occurs, the function returns -1. In that case, an
* in-memory event representation has been generated if json_p is
* non-NULL; the event then contains further error details in
* normalized form.
*
* @note
* This function works on byte-counted strings and as such is able to
* process NUL bytes if they occur inside the message. On the other hand,
* this means that the correct message size, \b excluding the NUL byte,
* must be provided.
*
* @param[in] ctx The library context to use.
* @param[in] str The message string (see note above).
* @param[in] strLen The length of the message in bytes.
* @param[out] json_p A new event record or NULL if an error occurred. Must be
* destructed if no longer needed.
*
* @return Returns zero on success, something else otherwise.
*/
int ln_v1_normalize(ln_ctx ctx, const char *str, size_t strLen, struct json_object **json_p);
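/* Illustrative call sequence (not from the original sources; it assumes a
 * library context already prepared for the v1 engine and a v1-format
 * rulebase file; names are placeholders and error handling is omitted):
 *
 *   struct json_object *json = NULL;
 *   ln_v1_loadSamples(ctx, "old-format.rulebase");
 *   if(ln_v1_normalize(ctx, msg, strlen(msg), &json) == 0) {
 *       ... use json ...
 *       json_object_put(json);
 *   }
 */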
/**
* create a single sample.
*/
struct ln_v1_samp* ln_v1_sampCreate(ln_ctx __attribute__((unused)) ctx);
/* here we add some stuff from the compatibility layer. A separate include
* would be cleaner, but would potentially require changes all over the
* place. So doing it here is better. The respective replacement
* functions should usually be found under ./compat -- rgerhards, 2015-05-20
*/
#endif /* #ifndef V1_LIBLOGNORM_H_INCLUDED */
liblognorm-2.0.6/src/lognorm.h 0000644 0001750 0001750 00000006664 13273030617 013237 0000000 0000000 /**
* @file lognorm.h
* @brief Private data structures used by the liblognorm API.
*//*
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010 by Rainer Gerhards and Adiscon GmbH.
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_LOGNORM_HINCLUDED
#define LIBLOGNORM_LOGNORM_HINCLUDED
#include <stddef.h> /* we need size_t */
#include "liblognorm.h"
#include "pdag.h"
#include "annot.h"
/* some limits */
#define MAX_FIELDNAME_LEN 1024
#define MAX_TYPENAME_LEN 1024
#define LN_ObjID_None 0xFEFE0001
#define LN_ObjID_CTX 0xFEFE0001
struct ln_type_pdag {
const char *name;
ln_pdag *pdag;
};
struct ln_ctx_s {
unsigned objID; /**< a magic number to prevent some memory addressing errors */
void (*dbgCB)(void *cookie, const char *msg, size_t lenMsg);
/**< user-provided debug output callback */
void *dbgCookie; /**< cookie to be passed to debug callback */
void (*errmsgCB)(void *cookie, const char *msg, size_t lenMsg);
/**< user-provided error message callback */
void *errmsgCookie; /**< cookie to be passed to error message callback */
ln_pdag *pdag; /**< parse dag being used by this context */
ln_annotSet *pas; /**< associated set of annotations */
unsigned nNodes; /**< number of nodes in our parse tree */
unsigned char debug; /**< boolean: are we in debug mode? */
es_str_t *rulePrefix; /**< work variable for loading rule bases
* this is the prefix string that will be prepended
* to all rules before they are submitted to tree
* building.
*/
unsigned opts; /**< specific options, see LN_CTXOPTS_* defines */
struct ln_type_pdag *type_pdags; /**< array of our type pdags */
int nTypes; /**< number of type pdags */
int version; /**< 1 or 2, depending on rulebase/algo version */
/* here follows stuff for the v1 subsystem -- do NOT make any changes
* down here. This is strictly read-only. May also be removed some time in
* the future.
*/
struct ln_ptree *ptree;
/* end old cruft */
/* things for config processing / error message during it */
int include_level; /**< 1 for main rulebase file, higher for include levels */
const char *conf_file; /**< currently open config file or NULL, if none */
unsigned int conf_ln_nbr; /**< current config file line number */
};
void ln_dbgprintf(ln_ctx ctx, const char *fmt, ...) __attribute__((format(printf, 2, 3)));
void ln_errprintf(ln_ctx ctx, const int eno, const char *fmt, ...) __attribute__((format(printf, 3, 4)));
#define LN_DBGPRINTF(ctx, ...) if(ctx->dbgCB != NULL) { ln_dbgprintf(ctx, __VA_ARGS__); }
//#define LN_DBGPRINTF(ctx, ...)
#endif /* #ifndef LIBLOGNORM_LOGNORM_HINCLUDED */
liblognorm-2.0.6/src/helpers.h 0000644 0001750 0001750 00000000511 13273030617 013205 0000000 0000000 /**
* @file helpers.h
* @brief Small inline helper functions used across the library.
*//*
* Copyright 2015 by Rainer Gerhards and Adiscon GmbH.
*
* Released under ASL 2.0.
*/
#ifndef LIBLOGNORM_HELPERS_H
#define LIBLOGNORM_HELPERS_H
static inline int myisdigit(char c)
{
return (c >= '0' && c <= '9');
}
#endif
liblognorm-2.0.6/src/Makefile.am 0000644 0001750 0001750 00000003642 13273030617 013436 0000000 0000000 # Uncomment for debugging
DEBUG = -g
PTHREADS_CFLAGS = -pthread
#CFLAGS += $(DEBUG)
# we need to clean the normalizer up once we have reached a decent
# milestone (latest at initial release!)
bin_PROGRAMS = lognormalizer
lognormalizer_SOURCES = lognormalizer.c
lognormalizer_CPPFLAGS = -I$(top_srcdir) $(WARN_CFLAGS) $(JSON_C_CFLAGS) $(LIBESTR_CFLAGS)
lognormalizer_LDADD = $(JSON_C_LIBS) $(LIBLOGNORM_LIBS) $(LIBESTR_LIBS) ../compat/compat.la
lognormalizer_DEPENDENCIES = liblognorm.la
check_PROGRAMS = ln_test
ln_test_SOURCES = $(lognormalizer_SOURCES)
ln_test_CPPFLAGS = $(lognormalizer_CPPFLAGS)
ln_test_LDADD = $(lognormalizer_LDADD)
ln_test_DEPENDENCIES = $(lognormalizer_DEPENDENCIES)
ln_test_LDFLAGS = -no-install
lib_LTLIBRARIES = liblognorm.la
liblognorm_la_SOURCES = \
liblognorm.c \
pdag.c \
annot.c \
samp.c \
lognorm.c \
parser.c \
enc_syslog.c \
enc_csv.c \
enc_xml.c
# Users vehemently requested that v2 shall be able to understand v1
# rulebases. As both are very very different, we now include the
# full v1 engine for this purpose. This here is what does this.
# see also: https://github.com/rsyslog/liblognorm/issues/103
liblognorm_la_SOURCES += \
v1_liblognorm.c \
v1_parser.c \
v1_ptree.c \
v1_samp.c
liblognorm_la_CPPFLAGS = $(JSON_C_CFLAGS) $(WARN_CFLAGS) $(LIBESTR_CFLAGS) $(PCRE_CFLAGS)
liblognorm_la_LIBADD = $(rt_libs) $(JSON_C_LIBS) $(LIBESTR_LIBS) $(PCRE_LIBS) -lestr
# info on version-info:
# http://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
# Note: v2 now starts at version 5, as v1 previously also had 4
liblognorm_la_LDFLAGS = -version-info 6:0:1
EXTRA_DIST = \
internal.h \
liblognorm.h \
lognorm.h \
pdag.h \
annot.h \
samp.h \
enc.h \
parser.h \
helpers.h
# and now the old cruft:
EXTRA_DIST += \
v1_liblognorm.h \
v1_parser.h \
v1_samp.h \
v1_ptree.h
include_HEADERS = liblognorm.h samp.h lognorm.h pdag.h annot.h enc.h parser.h lognorm-features.h
liblognorm-2.0.6/src/internal.h 0000644 0001750 0001750 00000007407 13370250151 013365 0000000 0000000 /**
* @file internal.h
* @brief Internal things just needed for building the library, but
* not to be installed.
*//**
* @mainpage
* Liblognorm is an easy to use and fast samples-based log normalization
* library.
*
* It can be passed a stream of arbitrary log messages, one at a time, and for
* each message it will output well-defined name-value pairs and a set of
* tags describing the message.
*
* For further details, see its initial announcement available at
* http://blog.gerhards.net/2010/10/introducing-liblognorm.html
*
* The public interface of this library is described in liblognorm.h.
*
* Liblognorm fully supports Unicode. Like most Linux tools, it operates
* on UTF-8 natively ("passive mode"). This was decided because it keeps
* the size of data structures small while still supporting all of the
* world's languages (actually more than when we used UCS-2).
*
* At the technical level, we can handle UTF-8 multibyte sequences transparently.
* Liblognorm only needs to look at a few US-ASCII characters to do the
* sample-base parsing (the characters that indicate fields), so this is no
* issue. Inside the parse tree, a multibyte sequence can simply be processed
* as if it were a sequence of different characters that each make up their
* own symbol. In fact, this even allows for somewhat greater parsing
* speed.
*//*
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2018 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef INTERNAL_H_INCLUDED
#define INTERNAL_H_INCLUDED
/* the jump-misses-init gcc warning overdoes it when we jump to the
* exit of a function to get proper finalization. So let's disable it.
* rgerhards, 2018-04-25
*/
#pragma GCC diagnostic ignored "-Wjump-misses-init"
#include "liblognorm.h"
#include <libestr.h>
/* we need to turn off this warning, as it also comes up in C99 mode, which
* we use.
*/
#ifndef _AIX
#pragma GCC diagnostic ignored "-Wdeclaration-after-statement"
#endif
/* support for simple error checking */
#define CHKR(x) \
if((r = (x)) != 0) goto done
#define CHKN(x) \
if((x) == NULL) { \
r = LN_NOMEM; \
goto done; \
}
#define FAIL(e) {r = (e); goto done;}
static inline char* ln_es_str2cstr(es_str_t **str)
{
int r = -1;
char *buf;
if (es_strlen(*str) == (*str)->lenBuf) {
CHKR(es_extendBuf(str, 1));
}
CHKN(buf = (char*)es_getBufAddr(*str));
buf[es_strlen(*str)] = '\0';
return buf;
done:
return NULL;
}
const char * ln_DataForDisplayCharTo(__attribute__((unused)) ln_ctx ctx, void *const pdata);
const char * ln_DataForDisplayLiteral(__attribute__((unused)) ln_ctx ctx, void *const pdata);
const char * ln_JsonConfLiteral(__attribute__((unused)) ln_ctx ctx, void *const pdata);
/* here we add some stuff from the compatibility layer */
#ifndef HAVE_STRNDUP
char * strndup(const char *s, size_t n);
#endif
#endif /* #ifndef INTERNAL_H_INCLUDED */
liblognorm-2.0.6/src/v1_liblognorm.c 0000644 0001750 0001750 00000004737 13273030617 014326 0000000 0000000 /* This file implements the liblognorm API.
* See header file for descriptions.
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2013-2015 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include "liblognorm.h"
#include "v1_liblognorm.h"
#include "v1_ptree.h"
#include "lognorm.h"
#include "annot.h"
#include "v1_samp.h"
#define ERR_ABORT {r = 1; goto done; }
#define CHECK_CTX \
if(ctx->objID != LN_ObjID_CTX) { \
r = -1; \
goto done; \
}
ln_ctx
ln_v1_inherittedCtx(ln_ctx parent)
{
ln_ctx child = ln_initCtx();
if (child != NULL) {
child->opts = parent->opts;
child->dbgCB = parent->dbgCB;
child->dbgCookie = parent->dbgCookie;
child->version = parent->version;
child->ptree = ln_newPTree(child, NULL);
}
return child;
}
int
ln_v1_loadSample(ln_ctx ctx, const char *buf)
{
// Something bad happened - no new sample
if (ln_v1_processSamp(ctx, buf, strlen(buf)) == NULL) {
return 1;
}
return 0;
}
int
ln_v1_loadSamples(ln_ctx ctx, const char *file)
{
int r = 0;
FILE *repo;
struct ln_v1_samp *samp;
int isEof = 0;
char *fn_to_free = NULL;
CHECK_CTX;
ctx->conf_file = fn_to_free = strdup(file);
ctx->conf_ln_nbr = 0;
if(file == NULL) ERR_ABORT;
if((repo = fopen(file, "r")) == NULL) {
ln_errprintf(ctx, errno, "cannot open file %s", file);
ERR_ABORT;
}
while(!isEof) {
if((samp = ln_v1_sampRead(ctx, repo, &isEof)) == NULL) {
/* TODO: what exactly to do? */
}
}
fclose(repo);
ctx->conf_file = NULL;
done:
free((void*)fn_to_free);
return r;
}
liblognorm-2.0.6/src/liblognorm.c 0000644 0001750 0001750 00000007604 13273030617 013714 0000000 0000000 /* This file implements the liblognorm API.
* See header file for descriptions.
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2013 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include <stdlib.h>
#include <string.h>
#include "liblognorm.h"
#include "lognorm.h"
#include "annot.h"
#include "samp.h"
#include "v1_liblognorm.h"
#include "v1_ptree.h"
#define CHECK_CTX \
if(ctx->objID != LN_ObjID_CTX) { \
r = -1; \
goto done; \
}
const char *
ln_version(void)
{
return VERSION;
}
int
ln_hasAdvancedStats(void)
{
#ifdef ADVANCED_STATS
return 1;
#else
return 0;
#endif
}
ln_ctx
ln_initCtx(void)
{
ln_ctx ctx;
if((ctx = calloc(1, sizeof(struct ln_ctx_s))) == NULL)
goto done;
#ifdef HAVE_JSON_GLOBAL_SET_STRING_HASH
json_global_set_string_hash(JSON_C_STR_HASH_PERLLIKE);
#endif
#ifdef HAVE_JSON_GLOBAL_SET_PRINTBUF_INITIAL_SIZE
json_global_set_printbuf_initial_size(2048);
#endif
ctx->objID = LN_ObjID_CTX;
ctx->dbgCB = NULL;
ctx->opts = 0;
/* we add a root for the empty word; this simplifies parse
* dag handling.
*/
if((ctx->pdag = ln_newPDAG(ctx)) == NULL) {
free(ctx);
ctx = NULL;
goto done;
}
/* same for annotation set */
if((ctx->pas = ln_newAnnotSet(ctx)) == NULL) {
ln_pdagDelete(ctx->pdag);
free(ctx);
ctx = NULL;
goto done;
}
done:
return ctx;
}
void
ln_setCtxOpts(ln_ctx ctx, const unsigned opts) {
ctx->opts |= opts;
}
int
ln_exitCtx(ln_ctx ctx)
{
int r = 0;
CHECK_CTX;
ln_dbgprintf(ctx, "exitCtx %p", ctx);
ctx->objID = LN_ObjID_None; /* prevent double free */
/* support for old cruft */
if(ctx->ptree != NULL)
ln_deletePTree(ctx->ptree);
/* end support for old cruft */
if(ctx->pdag != NULL)
ln_pdagDelete(ctx->pdag);
for(int i = 0 ; i < ctx->nTypes ; ++i) {
free((void*)ctx->type_pdags[i].name);
ln_pdagDelete(ctx->type_pdags[i].pdag);
}
free(ctx->type_pdags);
if(ctx->rulePrefix != NULL)
es_deleteStr(ctx->rulePrefix);
if(ctx->pas != NULL)
ln_deleteAnnotSet(ctx->pas);
free(ctx);
done:
return r;
}
int
ln_setDebugCB(ln_ctx ctx, void (*cb)(void*, const char*, size_t), void *cookie)
{
int r = 0;
CHECK_CTX;
ctx->dbgCB = cb;
ctx->dbgCookie = cookie;
done:
return r;
}
int
ln_setErrMsgCB(ln_ctx ctx, void (*cb)(void*, const char*, size_t), void *cookie)
{
int r = 0;
CHECK_CTX;
ctx->errmsgCB = cb;
ctx->errmsgCookie = cookie;
done:
return r;
}
int
ln_loadSamples(ln_ctx ctx, const char *file)
{
int r = 0;
const char *tofree;
CHECK_CTX;
ctx->conf_file = tofree = strdup(file);
ctx->conf_ln_nbr = 0;
++ctx->include_level;
r = ln_sampLoad(ctx, file);
--ctx->include_level;
free((void*)tofree);
ctx->conf_file = NULL;
done:
return r;
}
int
ln_loadSamplesFromString(ln_ctx ctx, const char *string)
{
int r = 0;
const char *tofree;
CHECK_CTX;
ctx->conf_file = tofree = strdup("--NO-FILE--");
ctx->conf_ln_nbr = 0;
++ctx->include_level;
r = ln_sampLoadFromString(ctx, string);
--ctx->include_level;
free((void*)tofree);
ctx->conf_file = NULL;
done:
return r;
}
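/* For illustration only: a minimal caller of the API implemented above might
 * look as follows. The rulebase file name is a placeholder and error handling
 * is reduced to the bare minimum; the json-c include path may differ per
 * installation.
 *
 *   #include <stdio.h>
 *   #include <string.h>
 *   #include <json.h>
 *   #include "liblognorm.h"
 *
 *   int main(void)
 *   {
 *       ln_ctx ctx = ln_initCtx();
 *       if(ctx == NULL || ln_loadSamples(ctx, "sample.rulebase") != 0)
 *           return 1;
 *       const char *msg = "some log line";
 *       struct json_object *json = NULL;
 *       ln_normalize(ctx, msg, strlen(msg), &json);
 *       if(json != NULL) {
 *           printf("%s\n", json_object_to_json_string(json));
 *           json_object_put(json);
 *       }
 *       ln_exitCtx(ctx);
 *       return 0;
 *   }
 */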
liblognorm-2.0.6/src/lognormalizer.c 0000644 0001750 0001750 00000033066 13370250152 014431 0000000 0000000 /**
* @file lognormalizer.c
* @brief A small tool to normalize data.
*
* This is the most basic example demonstrating how to use liblognorm.
* It loads log samples from the files specified on the command line,
* reads to-be-normalized data from stdin and writes the normalized
* form to stdout. Besides being an example, it also carries out useful
* processing.
*
* @author Rainer Gerhards
*
*//*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2016 by Rainer Gerhards and Adiscon GmbH.
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include <stdio.h>
#include <stdlib.h>
#ifdef _AIX
#include <unistd.h>
#else
#include <getopt.h>
#endif
#include <string.h>
#include "liblognorm.h"
#include "lognorm.h"
#include "enc.h"
/* we need to turn off this warning, as it also comes up in C99 mode, which
* we use.
*/
#pragma GCC diagnostic ignored "-Wdeclaration-after-statement"
static ln_ctx ctx;
static int verbose = 0;
#define OUTPUT_PARSED_RECS 0x01
#define OUTPUT_UNPARSED_RECS 0x02
static int recOutput = OUTPUT_PARSED_RECS | OUTPUT_UNPARSED_RECS;
/**< controls which records to output */
static int outputSummaryLine = 0;
static int outputNbrUnparsed = 0;
static int addErrLineNbr = 0; /**< add line number info to unparsed events */
static int flatTags = 0; /**< print event.tags in JSON? */
static FILE *fpDOT;
static es_str_t *encFmt = NULL; /**< a format string for encoder use */
static es_str_t *mandatoryTag = NULL; /**< tag which must be present so that a message will
be output. NULL=all */
static enum { f_syslog, f_json, f_xml, f_csv, f_raw } outfmt = f_json;
static void
errCallBack(void __attribute__((unused)) *cookie, const char *msg,
size_t __attribute__((unused)) lenMsg)
{
fprintf(stderr, "liblognorm error: %s\n", msg);
}
static void
dbgCallBack(void __attribute__((unused)) *cookie, const char *msg,
size_t __attribute__((unused)) lenMsg)
{
fprintf(stderr, "liblognorm: %s\n", msg);
}
static void
complain(const char *errmsg)
{
fprintf(stderr, "%s\n", errmsg);
}
/* rawmsg is, as the name says, the raw message, in case we have
* "raw" formatter requested.
*/
static void
outputEvent(struct json_object *json, const char *const rawmsg)
{
char *cstr = NULL;
es_str_t *str = NULL;
if(outfmt == f_raw) {
printf("%s\n", rawmsg);
return;
}
switch(outfmt) {
case f_json:
if(!flatTags) {
json_object_object_del(json, "event.tags");
}
cstr = (char*)json_object_to_json_string(json);
break;
case f_syslog:
ln_fmtEventToRFC5424(json, &str);
break;
case f_xml:
ln_fmtEventToXML(json, &str);
break;
case f_csv:
ln_fmtEventToCSV(json, &str, encFmt);
break;
case f_raw:
fprintf(stderr, "program error: f_raw should not occur "
"here (file %s, line %d)\n", __FILE__, __LINE__);
abort();
break;
default:
fprintf(stderr, "program error: default case should not occur "
"here (file %s, line %d)\n", __FILE__, __LINE__);
abort();
break;
}
if (str != NULL)
cstr = es_str2cstr(str, NULL);
if(verbose > 0) fprintf(stderr, "normalized: '%s'\n", cstr);
printf("%s\n", cstr);
if (str != NULL)
free(cstr);
es_deleteStr(str);
}
/* test if the tag exists */
static int
eventHasTag(struct json_object *json, const char *tag)
{
struct json_object *tagbucket, *tagObj;
int i;
const char *tagCstr;
if (tag == NULL)
return 1;
if (json_object_object_get_ex(json, "event.tags", &tagbucket)) {
if (json_object_get_type(tagbucket) == json_type_array) {
for (i = json_object_array_length(tagbucket) - 1; i >= 0; i--) {
tagObj = json_object_array_get_idx(tagbucket, i);
tagCstr = json_object_get_string(tagObj);
if (!strcmp(tag, tagCstr))
return 1;
}
}
}
if (verbose > 1)
printf("Mandatory tag '%s' has not been found\n", tag);
return 0;
}
static void
amendLineNbr(json_object *const json, const int line_nbr)
{
if(addErrLineNbr) {
struct json_object *jval;
jval = json_object_new_int(line_nbr);
json_object_object_add(json, "lognormalizer.line_nbr", jval);
}
}
#define DEFAULT_LINE_SIZE (10 * 1024)
static char *
read_line(FILE *fp)
{
size_t line_capacity = DEFAULT_LINE_SIZE;
char *line = NULL;
size_t line_len = 0;
int ch = 0;
do {
ch = fgetc(fp);
if (ch == EOF) break;
if (line == NULL) {
line = malloc(line_capacity);
} else if (line_len == line_capacity) {
line_capacity *= 2;
line = realloc(line, line_capacity);
}
if (line == NULL) {
fprintf(stderr, "Couldn't allocate working-buffer for log-line\n");
return NULL;
}
line[line_len++] = ch;
} while(ch != '\n');
if (line != NULL) {
line[--line_len] = '\0';
if(line_len > 0 && line[line_len - 1] == '\r')
line[--line_len] = '\0';
}
return line;
}
/* normalize input data
*/
static void
normalize(void)
{
FILE *fp = stdin;
char *line = NULL;
struct json_object *json = NULL;
long long unsigned numParsed = 0;
long long unsigned numUnparsed = 0;
long long unsigned numWrongTag = 0;
char *mandatoryTagCstr = NULL;
int line_nbr = 0; /* must be int to keep compatible with older json-c */
if (mandatoryTag != NULL) {
mandatoryTagCstr = es_str2cstr(mandatoryTag, NULL);
}
while((line = read_line(fp)) != NULL) {
++line_nbr;
if(verbose > 0) fprintf(stderr, "To normalize: '%s'\n", line);
ln_normalize(ctx, line, strlen(line), &json);
if(json != NULL) {
if(eventHasTag(json, mandatoryTagCstr)) {
struct json_object *dummy;
const int parsed = !json_object_object_get_ex(json,
"unparsed-data", &dummy);
if(parsed) {
numParsed++;
if(recOutput & OUTPUT_PARSED_RECS) {
outputEvent(json, line);
}
} else {
numUnparsed++;
amendLineNbr(json, line_nbr);
if(recOutput & OUTPUT_UNPARSED_RECS) {
outputEvent(json, line);
}
}
} else {
numWrongTag++;
}
json_object_put(json);
json = NULL;
}
free(line);
}
if(outputNbrUnparsed && numUnparsed > 0)
fprintf(stderr, "%llu unparsable entries\n", numUnparsed);
if(numWrongTag > 0)
fprintf(stderr, "%llu entries with wrong tag dropped\n", numWrongTag);
if(outputSummaryLine) {
fprintf(stderr, "%llu records processed, %llu parsed, %llu unparsed\n",
numParsed+numUnparsed, numParsed, numUnparsed);
}
free(mandatoryTagCstr);
}
/**
* Generate a command file for the GNU DOT tools.
*/
static void
genDOT(void)
{
es_str_t *str;
str = es_newStr(1024);
ln_genDotPDAGGraph(ctx->pdag, &str);
fwrite(es_getBufAddr(str), 1, es_strlen(str), fpDOT);
}
static
void printVersion(void)
{
fprintf(stderr, "lognormalizer version: " VERSION "\n");
fprintf(stderr, "liblognorm version: %s\n", ln_version());
fprintf(stderr, "\tadvanced stats: %s\n",
ln_hasAdvancedStats() ? "available" : "not available");
}
static void
handle_generic_option(const char* opt) {
if (strcmp("allowRegex", opt) == 0) {
ln_setCtxOpts(ctx, LN_CTXOPT_ALLOW_REGEX);
} else if (strcmp("addExecPath", opt) == 0) {
ln_setCtxOpts(ctx, LN_CTXOPT_ADD_EXEC_PATH);
} else if (strcmp("addOriginalMsg", opt) == 0) {
ln_setCtxOpts(ctx, LN_CTXOPT_ADD_ORIGINALMSG);
} else if (strcmp("addRule", opt) == 0) {
ln_setCtxOpts(ctx, LN_CTXOPT_ADD_RULE);
} else if (strcmp("addRuleLocation", opt) == 0) {
ln_setCtxOpts(ctx, LN_CTXOPT_ADD_RULE_LOCATION);
} else {
fprintf(stderr, "invalid -o option '%s'\n", opt);
exit(1);
}
}
static void usage(void)
{
fprintf(stderr,
"Options:\n"
" -r Rulebase to use. This is required option\n"
" -H print summary line (nbr of msgs Handled)\n"
" -U print number of unparsed messages (only if non-zero)\n"
" -e\n"
" Change output format. By default, json is used\n"
" Raw is exactly like the input. It is useful in combination\n"
" with -p/-P options to extract known good/bad messages\n"
" -E Encoder-specific format (used for CSV, read docs)\n"
" -T Include 'event.tags' in JSON format\n"
" -oallowRegex Allow regexp matching (read docs about performance penalty)\n"
" -oaddRule Add a mockup of the matching rule.\n"
" -oaddRuleLocation Add location of matching rule to metadata\n"
" -oaddExecPath Add exec_path attribute to output\n"
" -oaddOriginalMsg Always add original message to output, not just in error case\n"
" -p Print back only if the message has been parsed succesfully\n"
" -P Print back only if the message has NOT been parsed succesfully\n"
" -L Add source file line number information to unparsed line output\n"
" -t Print back only messages matching the tag\n"
" -v Print debug. When used 3 times, prints parse DAG\n"
" -V Print version information\n"
" -d Print DOT file to stdout and exit\n"
" -d Save DOT file to the filename\n"
" -s Print parse dag statistics and exit\n"
" -S Print extended parse dag statistics and exit (includes -s)\n"
" -x Print statistics as dot file (called only)\n"
"\n"
);
}
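/* For illustration only: with the options above, a typical invocation reads
 * messages from stdin and writes JSON to stdout (file names are placeholders):
 *
 *   lognormalizer -r sample.rulebase < messages.log
 *   lognormalizer -r sample.rulebase -e xml -P < messages.log
 *
 * The second form prints, as XML, only those messages that could NOT be parsed.
 */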
int main(int argc, char *argv[])
{
int opt;
char *repository = NULL;
int usedRB = 0; /* 0=no rule; 1=rule from rulebase; 2=rule from string */
int ret = 0;
FILE *fpStats = NULL;
FILE *fpStatsDOT = NULL;
int extendedStats = 0;
if((ctx = ln_initCtx()) == NULL) {
complain("Could not initialize liblognorm context");
ret = 1;
goto exit;
}
while((opt = getopt(argc, argv, "d:s:S:e:r:R:E:vVpPt:To:hHULx:")) != -1) {
switch (opt) {
case 'V':
printVersion();
exit(1);
break;
case 'd': /* generate DOT file */
if(!strcmp(optarg, "")) {
fpDOT = stdout;
} else {
if((fpDOT = fopen(optarg, "w")) == NULL) {
perror(optarg);
complain("Cannot open DOT file");
ret = 1;
goto exit;
}
}
break;
case 'x': /* generate statistics DOT file */
if(!strcmp(optarg, "")) {
fpStatsDOT = stdout;
} else {
if((fpStatsDOT = fopen(optarg, "w")) == NULL) {
perror(optarg);
complain("Cannot open statistics DOT file");
ret = 1;
goto exit;
}
}
break;
case 'S': /* generate pdag statistic file */
extendedStats = 1;
/* INTENTIONALLY NO BREAK! - KEEP order! */
/*FALLTHROUGH*/
case 's': /* generate pdag statistic file */
if(!strcmp(optarg, "-")) {
fpStats = stdout;
} else {
if((fpStats = fopen(optarg, "w")) == NULL) {
perror(optarg);
complain("Cannot open parser statistics file");
ret = 1;
goto exit;
}
}
break;
case 'v':
verbose++;
break;
case 'E': /* encoder-specific format string (will be validated by encoder) */
encFmt = es_newStrFromCStr(optarg, strlen(optarg));
break;
case 'p':
recOutput = OUTPUT_PARSED_RECS;
break;
case 'P':
recOutput = OUTPUT_UNPARSED_RECS;
break;
case 'H':
outputSummaryLine = 1;
break;
case 'U':
outputNbrUnparsed = 1;
break;
case 'L':
addErrLineNbr = 1;
break;
case 'T':
flatTags = 1;
break;
case 'e': /* encoder to use */
if(!strcmp(optarg, "json")) {
outfmt = f_json;
} else if(!strcmp(optarg, "xml")) {
outfmt = f_xml;
} else if(!strcmp(optarg, "cee-syslog")) {
outfmt = f_syslog;
} else if(!strcmp(optarg, "csv")) {
outfmt = f_csv;
} else if(!strcmp(optarg, "raw")) {
outfmt = f_raw;
}
break;
case 'r': /* rule base to use */
if(usedRB != 2) {
repository = optarg;
usedRB = 1;
} else {
usedRB = -1;
}
break;
case 'R':
if(usedRB != 1) {
repository = optarg;
usedRB = 2;
} else {
usedRB = -1;
}
break;
case 't': /* if given, only messages tagged with the argument
are output */
mandatoryTag = es_newStrFromCStr(optarg, strlen(optarg));
break;
case 'o':
handle_generic_option(optarg);
break;
case 'h':
default:
usage();
ret = 1;
goto exit;
break;
}
}
if(repository == NULL) {
complain("Samples repository or String must be given (-r or -R)");
ret = 1;
goto exit;
}
if(usedRB == -1) {
complain("Only use one rulebase (-r or -R)");
ret = 1;
goto exit;
}
ln_setErrMsgCB(ctx, errCallBack, NULL);
if(verbose) {
ln_setDebugCB(ctx, dbgCallBack, NULL);
ln_enableDebug(ctx, 1);
}
if(usedRB == 1) {
if(ln_loadSamples(ctx, repository)) {
fprintf(stderr, "fatal error: cannot load rulebase\n");
exit(1);
}
} else if(usedRB == 2) {
if(ln_loadSamplesFromString(ctx, repository)) {
fprintf(stderr, "fatal error: cannot load rule from String\n");
exit(1);
}
}
if(verbose > 0)
fprintf(stderr, "number of tree nodes: %d\n", ctx->nNodes);
if(fpDOT != NULL) {
genDOT();
ret=1;
goto exit;
}
if(verbose > 2) ln_displayPDAG(ctx);
normalize();
if(fpStats != NULL) {
ln_fullPdagStats(ctx, fpStats, extendedStats);
}
if(fpStatsDOT != NULL) {
ln_fullPDagStatsDOT(ctx, fpStatsDOT);
}
exit:
if (ctx) ln_exitCtx(ctx);
if (encFmt != NULL)
free(encFmt);
return ret;
}
liblognorm-2.0.6/src/v1_parser.c 0000644 0001750 0001750 00000244760 13273030617 013460 0000000 0000000 /*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2018 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <ctype.h>
#include <assert.h>
#include <errno.h>
#include <sys/types.h>
#include "v1_liblognorm.h"
#include "internal.h"
#include "lognorm.h"
#include "v1_parser.h"
#include "v1_samp.h"
#ifdef FEATURE_REGEXP
#include <pcre.h>
#include <errno.h>
#endif
/* some helpers */
static inline int
hParseInt(const unsigned char **buf, size_t *lenBuf)
{
const unsigned char *p = *buf;
size_t len = *lenBuf;
int i = 0;
while(len > 0 && isdigit(*p)) {
i = i * 10 + *p - '0';
++p;
--len;
}
*buf = p;
*lenBuf = len;
return i;
}
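/* For illustration only: given a buffer pointing at "123abc" with lenBuf = 6,
 * hParseInt() returns 123 and advances the buffer to "abc" (lenBuf = 3); if
 * the buffer does not start with a digit, it returns 0 and consumes nothing.
 */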
/* parsers for the primitive types
*
* All parsers receive
*
* @param[in] str the to-be-parsed string
* @param[in] strLen length of the to-be-parsed string
* @param[in] offs an offset into the string
* @param[in] node fieldlist with additional data; for simple
* parsers, this sets variable "ed", which is just
* string data.
* @param[out] parsed number of bytes parsed
* @param[out] value ptr to json object containing parsed data
* (can be unused, but if used *value MUST be NULL on entry)
*
* They will try to parse out "their" object from the string. They
* return 0 on success, LN_WRONGPARSER if this parser could not
* successfully parse (but all went well otherwise), and something
* else in case of an error.
*/
#define PARSER(ParserName) \
int ln_parse##ParserName(const char *const str, const size_t strLen, \
size_t *const offs, \
__attribute__((unused)) const ln_fieldList_t *node, \
size_t *parsed, \
__attribute__((unused)) struct json_object **value) \
{ \
int r = LN_WRONGPARSER; \
__attribute__((unused)) es_str_t *ed = node->data; \
*parsed = 0;
#define FAILParser \
goto parserdone; /* suppress warnings */ \
parserdone: \
r = 0; \
goto done; /* suppress warnings */ \
done:
#define ENDFailParser \
return r; \
}
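/* For illustration only: a hypothetical minimal parser built with the
 * PARSER() macro (the name "DigitRun" is made up) would look like this:
 *
 *   PARSER(DigitRun)
 *       size_t i = *offs;
 *       while(i < strLen && isdigit(str[i]))
 *           i++;
 *       if(i == *offs)
 *           goto done;
 *       *parsed = i - *offs;
 *       r = 0;
 *   done:
 *       return r;
 *   }
 *
 * PARSER() opens the function and initializes r to LN_WRONGPARSER, so falling
 * through to "done" without setting r reports a non-match.
 */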
/**
* Utilities to allow constructors of complex parser's to
* easily process field-declaration arguments.
*/
#define FIELD_ARG_SEPERATOR ":"
#define MAX_FIELD_ARGS 10
struct pcons_args_s {
int argc;
char *argv[MAX_FIELD_ARGS];
};
typedef struct pcons_args_s pcons_args_t;
static void free_pcons_args(pcons_args_t** dat_p) {
pcons_args_t *dat = *dat_p;
*dat_p = NULL;
if (! dat) {
return;
}
while((--(dat->argc)) >= 0) {
if (dat->argv[dat->argc] != NULL) free(dat->argv[dat->argc]);
}
free(dat);
}
static pcons_args_t* pcons_args(es_str_t *args, int expected_argc) {
pcons_args_t *dat = NULL;
char* orig_str = NULL;
if ((dat = malloc(sizeof(pcons_args_t))) == NULL) goto fail;
dat->argc = 0;
if (args != NULL) {
orig_str = es_str2cstr(args, NULL);
char *str = orig_str;
while (dat->argc < MAX_FIELD_ARGS) {
int i = dat->argc++;
char *next = (dat->argc == expected_argc) ? NULL : strstr(str, FIELD_ARG_SEPERATOR);
if (next == NULL) {
if ((dat->argv[i] = strdup(str)) == NULL) goto fail;
break;
} else {
if ((dat->argv[i] = strndup(str, next - str)) == NULL) goto fail;
next++;
}
str = next;
}
}
goto done;
fail:
if (dat != NULL) free_pcons_args(&dat);
done:
if (orig_str != NULL) free(orig_str);
return dat;
}
static const char* pcons_arg(pcons_args_t *dat, int i, const char* dflt_val) {
if (i >= dat->argc) return dflt_val;
return dat->argv[i];
}
static char* pcons_arg_copy(pcons_args_t *dat, int i, const char* dflt_val) {
const char *str = pcons_arg(dat, i, dflt_val);
return (str == NULL) ? NULL : strdup(str);
}
static void pcons_unescape_arg(pcons_args_t *dat, int i) {
char *arg = (char*) pcons_arg(dat, i, NULL);
es_str_t *str = NULL;
if (arg != NULL) {
str = es_newStrFromCStr(arg, strlen(arg));
if (str != NULL) {
es_unescapeStr(str);
free(arg);
dat->argv[i] = es_str2cstr(str, NULL);
es_deleteStr(str);
}
}
}
/**
* Parse a TIMESTAMP as specified in RFC5424 (subset of RFC3339).
*/
PARSER(RFC5424Date)
const unsigned char *pszTS;
/* variables to temporarily hold time information while we parse */
__attribute__((unused)) int year;
int month;
int day;
int hour; /* 24 hour clock */
int minute;
int second;
__attribute__((unused)) int secfrac; /* fractional seconds (must be 32 bit!) */
__attribute__((unused)) int secfracPrecision;
int OffsetHour; /* UTC offset in hours */
int OffsetMinute; /* UTC offset in minutes */
size_t len;
size_t orglen;
/* end variables to temporarily hold time information while we parse */
pszTS = (unsigned char*) str + *offs;
len = orglen = strLen - *offs;
year = hParseInt(&pszTS, &len);
/* We take the liberty to accept slightly malformed timestamps e.g. in
* the format of 2003-9-1T1:0:0. */
if(len == 0 || *pszTS++ != '-') goto done;
--len;
month = hParseInt(&pszTS, &len);
if(month < 1 || month > 12) goto done;
if(len == 0 || *pszTS++ != '-')
goto done;
--len;
day = hParseInt(&pszTS, &len);
if(day < 1 || day > 31) goto done;
if(len == 0 || *pszTS++ != 'T') goto done;
--len;
hour = hParseInt(&pszTS, &len);
if(hour < 0 || hour > 23) goto done;
if(len == 0 || *pszTS++ != ':')
goto done;
--len;
minute = hParseInt(&pszTS, &len);
if(minute < 0 || minute > 59) goto done;
if(len == 0 || *pszTS++ != ':') goto done;
--len;
second = hParseInt(&pszTS, &len);
if(second < 0 || second > 60) goto done;
/* Now let's see if we have secfrac */
if(len > 0 && *pszTS == '.') {
--len;
const unsigned char *pszStart = ++pszTS;
secfrac = hParseInt(&pszTS, &len);
secfracPrecision = (int) (pszTS - pszStart);
} else {
secfracPrecision = 0;
secfrac = 0;
}
/* check the timezone */
if(len == 0) goto done;
if(*pszTS == 'Z') {
--len;
pszTS++; /* eat Z */
} else if((*pszTS == '+') || (*pszTS == '-')) {
--len;
pszTS++;
OffsetHour = hParseInt(&pszTS, &len);
if(OffsetHour < 0 || OffsetHour > 23)
goto done;
if(len == 0 || *pszTS++ != ':')
goto done;
--len;
OffsetMinute = hParseInt(&pszTS, &len);
if(OffsetMinute < 0 || OffsetMinute > 59)
goto done;
} else {
/* there MUST be TZ information */
goto done;
}
if(len > 0) {
if(*pszTS != ' ') /* if it is not a space, it can not be a "good" time */
goto done;
}
/* we had success, so update parse pointer */
*parsed = orglen - len;
r = 0; /* success */
done:
return r;
}
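/* For illustration only: per the checks above, timestamps such as
 * "2018-03-01T14:07:33.529+01:00" or "2018-03-01T14:07:33Z" are accepted,
 * as are slightly malformed variants like "2018-3-1T1:0:0Z"; a timestamp
 * without any timezone indication is rejected.
 */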
/**
* Parse a RFC3164 Date.
*/
PARSER(RFC3164Date)
const unsigned char *p;
size_t len, orglen;
/* variables to temporarily hold time information while we parse */
__attribute__((unused)) int month;
int day;
#if 0 /* TODO: why does this still exist? */
int year = 0; /* 0 means no year provided */
#endif
int hour; /* 24 hour clock */
int minute;
int second;
p = (unsigned char*) str + *offs;
orglen = len = strLen - *offs;
/* If we look at the month (Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec),
* we may see the following character sequences occur:
*
* J(an/u(n/l)), Feb, Ma(r/y), A(pr/ug), Sep, Oct, Nov, Dec
*
* We will use this for parsing, as it probably is the
* fastest way to parse it.
*/
if(len < 3)
goto done;
switch(*p++)
{
case 'j':
case 'J':
if(*p == 'a' || *p == 'A') {
++p;
if(*p == 'n' || *p == 'N') {
++p;
month = 1;
} else
goto done;
} else if(*p == 'u' || *p == 'U') {
++p;
if(*p == 'n' || *p == 'N') {
++p;
month = 6;
} else if(*p == 'l' || *p == 'L') {
++p;
month = 7;
} else
goto done;
} else
goto done;
break;
case 'f':
case 'F':
if(*p == 'e' || *p == 'E') {
++p;
if(*p == 'b' || *p == 'B') {
++p;
month = 2;
} else
goto done;
} else
goto done;
break;
case 'm':
case 'M':
if(*p == 'a' || *p == 'A') {
++p;
if(*p == 'r' || *p == 'R') {
++p;
month = 3;
} else if(*p == 'y' || *p == 'Y') {
++p;
month = 5;
} else
goto done;
} else
goto done;
break;
case 'a':
case 'A':
if(*p == 'p' || *p == 'P') {
++p;
if(*p == 'r' || *p == 'R') {
++p;
month = 4;
} else
goto done;
} else if(*p == 'u' || *p == 'U') {
++p;
if(*p == 'g' || *p == 'G') {
++p;
month = 8;
} else
goto done;
} else
goto done;
break;
case 's':
case 'S':
if(*p == 'e' || *p == 'E') {
++p;
if(*p == 'p' || *p == 'P') {
++p;
month = 9;
} else
goto done;
} else
goto done;
break;
case 'o':
case 'O':
if(*p == 'c' || *p == 'C') {
++p;
if(*p == 't' || *p == 'T') {
++p;
month = 10;
} else
goto done;
} else
goto done;
break;
case 'n':
case 'N':
if(*p == 'o' || *p == 'O') {
++p;
if(*p == 'v' || *p == 'V') {
++p;
month = 11;
} else
goto done;
} else
goto done;
break;
case 'd':
case 'D':
if(*p == 'e' || *p == 'E') {
++p;
if(*p == 'c' || *p == 'C') {
++p;
month = 12;
} else
goto done;
} else
goto done;
break;
default:
goto done;
}
len -= 3;
/* done month */
if(len == 0 || *p++ != ' ')
goto done;
--len;
/* we accept a slightly malformed timestamp with one-digit days. */
if(*p == ' ') {
--len;
++p;
}
day = hParseInt(&p, &len);
if(day < 1 || day > 31)
goto done;
if(len == 0 || *p++ != ' ')
goto done;
--len;
/* time part */
hour = hParseInt(&p, &len);
if(hour > 1970 && hour < 2100) {
/* if so, we assume this actually is a year. This is a format found
* e.g. in Cisco devices.
*
year = hour;
*/
/* re-query the hour, this time it must be valid */
if(len == 0 || *p++ != ' ')
goto done;
--len;
hour = hParseInt(&p, &len);
}
if(hour < 0 || hour > 23)
goto done;
if(len == 0 || *p++ != ':')
goto done;
--len;
minute = hParseInt(&p, &len);
if(minute < 0 || minute > 59)
goto done;
if(len == 0 || *p++ != ':')
goto done;
--len;
second = hParseInt(&p, &len);
if(second < 0 || second > 60)
goto done;
/* we provide support for an extra ":" after the date. While this is an
* invalid format, it occurs frequently enough (e.g. with Cisco devices)
* to permit it as a valid case. -- rgerhards, 2008-09-12
*/
if(len > 0 && *p == ':') {
++p; /* just skip past it */
--len;
}
/* we had success, so update parse pointer */
*parsed = orglen - len;
r = 0; /* success */
done:
return r;
}
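/* For illustration only: per the checks above, timestamps such as
 * "Mar  1 14:07:33" or "Oct 11 22:14:15" are accepted; a year in place of
 * the hour (e.g. "Mar  1 2018 14:07:33", as emitted by some Cisco devices)
 * is skipped over, and a trailing ":" after the time is tolerated.
 */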
/**
* Parse a Number.
* Note that a number is an abstracted concept. We always represent it
* as 64 bits (but may later change our mind if performance dictates so).
*/
PARSER(Number)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
for (i = *offs; i < strLen && isdigit(c[i]); i++);
if (i == *offs)
goto done;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse a Real-number in floating-pt form.
*/
PARSER(Float)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
int seen_point = 0;
i = *offs;
if (c[i] == '-') i++;
for (; i < strLen; i++) {
if (c[i] == '.') {
if (seen_point != 0) break;
seen_point = 1;
} else if (! isdigit(c[i])) {
break;
}
}
if (i == *offs)
goto done;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse a hex Number.
* A hex number begins with 0x and contains only hex digits until the terminating
* whitespace. Note that if a non-hex character is detected inside the number string,
* this is NOT considered to be a number.
*/
PARSER(HexNumber)
const char *c;
size_t i = *offs;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
if(c[i] != '0' || c[i+1] != 'x')
goto done;
for (i += 2 ; i < strLen && isxdigit(c[i]); i++);
if (i == *offs || !isspace(c[i]))
goto done;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
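/* For illustration only: per the checks above, "0x1f3a " (followed by
 * whitespace) is accepted, while "0x1f3aZZ" is not, because the character
 * after the hex digits must be whitespace.
 */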
/**
* Parse a kernel timestamp.
* This is a fixed format, see
* https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/printk/printk.c?id=refs/tags/v4.0#n1011
* This is the code that generates it:
* sprintf(buf, "[%5lu.%06lu] ", (unsigned long)ts, rem_nsec / 1000);
* We accept up to 12 digits for ts, everything above that for sure is
* no timestamp.
*/
#define LEN_KERNEL_TIMESTAMP 14
PARSER(KernelTimestamp)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
if(c[i] != '[' || i+LEN_KERNEL_TIMESTAMP > strLen
|| !isdigit(c[i+1])
|| !isdigit(c[i+2])
|| !isdigit(c[i+3])
|| !isdigit(c[i+4])
|| !isdigit(c[i+5])
)
goto done;
i += 6;
for(int j = 0 ; j < 7 && i < strLen && isdigit(c[i]) ; )
++i, ++j; /* just scan */
if(i >= strLen || c[i] != '.')
goto done;
++i; /* skip over '.' */
if( i+7 > strLen
|| !isdigit(c[i+0])
|| !isdigit(c[i+1])
|| !isdigit(c[i+2])
|| !isdigit(c[i+3])
|| !isdigit(c[i+4])
|| !isdigit(c[i+5])
|| c[i+6] != ']'
)
goto done;
i += 7;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse whitespace.
* This parses all whitespace until the first non-whitespace character
* is found. This is primarily a tool to skip to the next "word" if
* the exact number of whitespace characters (and type of whitespace)
* is not known. The current parsing position MUST be on a whitespace,
* else the parser does not match.
* This parser is also a forward-compatibility tool for the upcoming
* slsa (simple log structure analyser) tool.
*/
PARSER(Whitespace)
const char *c;
size_t i = *offs;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
if(!isspace(c[i]))
goto done;
for (i++ ; i < strLen && isspace(c[i]); i++);
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse a word.
* A word is a SP-delimited entity. The parser always works, except if
* the offset is positioned on a space upon entry.
*/
PARSER(Word)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
/* search end of word */
while(i < strLen && c[i] != ' ')
i++;
if(i == *offs)
goto done;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse everything up to a specific string.
* swisskid, 2015-01-21
*/
PARSER(StringTo)
const char *c;
char *toFind = NULL;
size_t i, j, k, m;
int chkstr;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
assert(ed != NULL);
k = es_strlen(ed) - 1;
toFind = es_str2cstr(ed, NULL);
c = str;
i = *offs;
chkstr = 0;
/* Total hunt for letter */
while(chkstr == 0 && i < strLen ) {
i++;
if(c[i] == toFind[0]) {
/* Found the first letter, now find the rest of the string */
j = 0;
m = i;
while(m < strLen && j < k ) {
m++;
j++;
if(c[m] != toFind[j])
break;
if (j == k)
chkstr = 1;
}
}
}
if(i == *offs || i == strLen || c[i] != toFind[0])
goto done;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
if(toFind != NULL) free(toFind);
return r;
}
/**
* Parse an alphabetic word.
* An alpha word is composed of characters for which isalpha() returns true.
* The parser fails if there is no alpha character at all.
*/
PARSER(Alpha)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
/* search end of word */
while(i < strLen && isalpha(c[i]))
i++;
if(i == *offs) {
goto done;
}
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse everything up to a specific character.
* The character must be the only char inside extra data passed to the parser.
* It is a program error if strlen(ed) != 1. It is considered a format error if
* a) the to-be-parsed buffer is already positioned on the terminator character
* b) there is no terminator until the end of the buffer
* In those cases, the parser declares itself as not being successful; in all
* other cases a string is extracted.
*/
PARSER(CharTo)
const char *c;
unsigned char cTerm;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
assert(es_strlen(ed) == 1);
cTerm = *(es_getBufAddr(ed));
c = str;
i = *offs;
/* search end of word */
while(i < strLen && c[i] != cTerm)
i++;
if(i == *offs || i == strLen || c[i] != cTerm)
goto done;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse everything up to a specific character, or up to the end of string.
* The character must be the only char inside extra data passed to the parser.
* It is a program error if strlen(ed) != 1.
* This parser always returns success.
* By nature of the parser, it is required that end of string or the separator
* follows this field in the rule.
*/
PARSER(CharSeparated)
const char *c;
unsigned char cTerm;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
assert(es_strlen(ed) == 1);
cTerm = *(es_getBufAddr(ed));
c = str;
i = *offs;
/* search end of word */
while(i < strLen && c[i] != cTerm)
i++;
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
return r;
}
/**
* Parse yet-to-be-matched portion of string by re-applying
* top-level rules again.
*/
#define DEFAULT_REMAINING_FIELD_NAME "tail"
struct recursive_parser_data_s {
ln_ctx ctx;
char* remaining_field;
int free_ctx;
};
PARSER(Recursive)
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
struct recursive_parser_data_s* pData = (struct recursive_parser_data_s*) node->parser_data;
if (pData != NULL) {
int remaining_len = strLen - *offs;
const char *remaining_str = str + *offs;
json_object *unparsed = NULL;
CHKN(*value = json_object_new_object());
ln_normalize(pData->ctx, remaining_str, remaining_len, value);
if (json_object_object_get_ex(*value, UNPARSED_DATA_KEY, &unparsed)) {
json_object_put(*value);
*value = NULL;
*parsed = 0;
} else if (pData->remaining_field != NULL
&& json_object_object_get_ex(*value, pData->remaining_field, &unparsed)) {
*parsed = strLen - *offs - json_object_get_string_len(unparsed);
json_object_object_del(*value, pData->remaining_field);
} else {
*parsed = strLen - *offs;
}
}
r = 0; /* success */
done:
return r;
}
typedef ln_ctx (ctx_constructor)(ln_ctx, pcons_args_t*, const char*);
static void*
_recursive_parser_data_constructor(ln_fieldList_t *node,
ln_ctx ctx,
int no_of_args,
int remaining_field_arg_idx,
int free_ctx, ctx_constructor *fn)
{
int r = LN_BADCONFIG;
char* name = NULL;
struct recursive_parser_data_s *pData = NULL;
pcons_args_t *args = NULL;
CHKN(name = es_str2cstr(node->name, NULL));
CHKN(pData = calloc(1, sizeof(struct recursive_parser_data_s)));
pData->free_ctx = free_ctx;
pData->remaining_field = NULL;
CHKN(args = pcons_args(node->raw_data, no_of_args));
CHKN(pData->ctx = fn(ctx, args, name));
CHKN(pData->remaining_field = pcons_arg_copy(args, remaining_field_arg_idx, DEFAULT_REMAINING_FIELD_NAME));
r = 0;
done:
if (r != 0) {
if (name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for recursive/descent field name");
else if (pData == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for parser-data for field: %s", name);
else if (args == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for argument-parsing for field: %s", name);
else if (pData->ctx == NULL)
ln_dbgprintf(ctx, "recursive/descent normalizer context creation "
"doneed for field: %s", name);
else if (pData->remaining_field == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for remaining-field name for "
"recursive/descent field: %s", name);
recursive_parser_data_destructor((void**) &pData);
}
free(name);
free_pcons_args(&args);
return pData;
}
static ln_ctx identity_recursive_parse_ctx_constructor(ln_ctx parent_ctx,
__attribute__((unused)) pcons_args_t* args,
__attribute__((unused)) const char* field_name) {
return parent_ctx;
}
void* recursive_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx) {
return _recursive_parser_data_constructor(node, ctx, 1, 0, 0, identity_recursive_parse_ctx_constructor);
}
static ln_ctx child_recursive_parse_ctx_constructor(ln_ctx parent_ctx, pcons_args_t* args, const char* field_name) {
int r = LN_BADCONFIG;
const char* rb = NULL;
ln_ctx ctx = NULL;
pcons_unescape_arg(args, 0);
CHKN(rb = pcons_arg(args, 0, NULL));
CHKN(ctx = ln_v1_inherittedCtx(parent_ctx));
CHKR(ln_v1_loadSamples(ctx, rb));
done:
if (r != 0) {
if (rb == NULL)
ln_dbgprintf(parent_ctx, "file-name for descent rulebase not provided for field: %s",
field_name);
else if (ctx == NULL)
ln_dbgprintf(parent_ctx, "couldn't allocate memory to create descent-field normalizer "
"context for field: %s", field_name);
else
ln_dbgprintf(parent_ctx, "couldn't load samples into descent context for field: %s",
field_name);
if (ctx != NULL) ln_exitCtx(ctx);
ctx = NULL;
}
return ctx;
}
void* descent_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx) {
return _recursive_parser_data_constructor(node, ctx, 2, 1, 1, child_recursive_parse_ctx_constructor);
}
void recursive_parser_data_destructor(void** dataPtr) {
if (*dataPtr != NULL) {
struct recursive_parser_data_s *pData = (struct recursive_parser_data_s*) *dataPtr;
if (pData->free_ctx && pData->ctx != NULL) {
ln_exitCtx(pData->ctx);
pData->ctx = NULL;
}
if (pData->remaining_field != NULL) free(pData->remaining_field);
free(pData);
*dataPtr = NULL;
}
};
/**
* Parse string tokenized by given char-sequence
* The sequence may appear 0 or more times, but zero times means 1 token.
* NOTE: it's not 0 tokens, but 1 token.
*
* The token found is parsed according to the field-type provided after
* tokenizer char-seq.
*/
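/* For illustration only: in v1 rulebase syntax a tokenized field takes the
 * tokenizer sequence and the per-token field type as arguments. A declaration
 * along the lines of %list:tokenized:, :number% (shown as an assumption, see
 * the field-type documentation) would match "10, 20, 30" and yield an array
 * of the individual numbers.
 */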
#define DEFAULT_MATCHED_FIELD_NAME "default"
struct tokenized_parser_data_s {
es_str_t *tok_str;
ln_ctx ctx;
char *remaining_field;
int use_default_field;
int free_ctx;
};
typedef struct tokenized_parser_data_s tokenized_parser_data_t;
PARSER(Tokenized)
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
tokenized_parser_data_t *pData = (tokenized_parser_data_t*) node->parser_data;
if (pData != NULL ) {
json_object *json_p = NULL;
if (pData->use_default_field) CHKN(json_p = json_object_new_object());
json_object *matches = NULL;
CHKN(matches = json_object_new_array());
int remaining_len = strLen - *offs;
const char *remaining_str = str + *offs;
json_object *remaining = NULL;
json_object *match = NULL;
while (remaining_len > 0) {
if (! pData->use_default_field) {
json_object_put(json_p);
json_p = json_object_new_object();
} /*TODO: handle null condition gracefully*/
ln_normalize(pData->ctx, remaining_str, remaining_len, &json_p);
if (remaining) json_object_put(remaining);
if (pData->use_default_field
&& json_object_object_get_ex(json_p, DEFAULT_MATCHED_FIELD_NAME, &match)) {
json_object_array_add(matches, json_object_get(match));
} else if (! (pData->use_default_field
|| json_object_object_get_ex(json_p, UNPARSED_DATA_KEY, &match))) {
json_object_array_add(matches, json_object_get(json_p));
} else {
if (json_object_array_length(matches) > 0) {
remaining_len += es_strlen(pData->tok_str);
break;
} else {
json_object_put(json_p);
json_object_put(matches);
FAIL(LN_WRONGPARSER);
}
}
if (json_object_object_get_ex(json_p, pData->remaining_field, &remaining)) {
remaining_len = json_object_get_string_len(remaining);
if (remaining_len > 0) {
remaining_str = json_object_get_string(json_object_get(remaining));
json_object_object_del(json_p, pData->remaining_field);
if (es_strbufcmp(pData->tok_str, (const unsigned char *)remaining_str,
es_strlen(pData->tok_str))) {
json_object_put(remaining);
break;
} else {
remaining_str += es_strlen(pData->tok_str);
remaining_len -= es_strlen(pData->tok_str);
}
}
} else {
remaining_len = 0;
break;
}
if (pData->use_default_field) json_object_object_del(json_p, DEFAULT_MATCHED_FIELD_NAME);
}
json_object_put(json_p);
/* success, persist */
*parsed = (strLen - *offs) - remaining_len;
*value = matches;
} else {
FAIL(LN_BADPARSERSTATE);
}
r = 0; /* success */
done:
return r;
}
void tokenized_parser_data_destructor(void** dataPtr) {
tokenized_parser_data_t *data = (tokenized_parser_data_t*) *dataPtr;
if (data->tok_str != NULL) es_deleteStr(data->tok_str);
if (data->free_ctx && (data->ctx != NULL)) ln_exitCtx(data->ctx);
if (data->remaining_field != NULL) free(data->remaining_field);
free(data);
*dataPtr = NULL;
}
static void load_generated_parser_samples(ln_ctx ctx,
const char* const field_descr, const int field_descr_len,
const char* const suffix, const int length) {
static const char* const RULE_PREFIX = "rule=:%"DEFAULT_MATCHED_FIELD_NAME":";/*TODO: extract nice constants*/
static const int RULE_PREFIX_LEN = 15;
char *sample_str = NULL;
es_str_t *field_decl = es_newStrFromCStr(RULE_PREFIX, RULE_PREFIX_LEN);
if (! field_decl) goto free;
if (es_addBuf(&field_decl, field_descr, field_descr_len)
|| es_addBuf(&field_decl, "%", 1)
|| es_addBuf(&field_decl, suffix, length)) {
ln_dbgprintf(ctx, "couldn't prepare field for tokenized field-picking: '%s'", field_descr);
goto free;
}
sample_str = es_str2cstr(field_decl, NULL);
if (! sample_str) {
ln_dbgprintf(ctx, "couldn't prepare sample-string for: '%s'", field_descr);
goto free;
}
ln_v1_loadSample(ctx, sample_str);
free:
if (sample_str) free(sample_str);
if (field_decl) es_deleteStr(field_decl);
}
static ln_ctx generate_context_with_field_as_prefix(ln_ctx parent, const char* field_descr, int field_descr_len) {
int r = LN_BADCONFIG;
const char* remaining_field = "%"DEFAULT_REMAINING_FIELD_NAME":rest%";
ln_ctx ctx = NULL;
CHKN(ctx = ln_v1_inherittedCtx(parent));
load_generated_parser_samples(ctx, field_descr, field_descr_len, remaining_field, strlen(remaining_field));
load_generated_parser_samples(ctx, field_descr, field_descr_len, "", 0);
r = 0;
done:
if (r != 0) {
ln_exitCtx(ctx);
ctx = NULL;
}
return ctx;
}
static ln_fieldList_t* parse_tokenized_content_field(ln_ctx ctx, const char* field_descr, size_t field_descr_len) {
es_str_t* tmp = NULL;
es_str_t* descr = NULL;
ln_fieldList_t *node = NULL;
int r = 0;
CHKN(tmp = es_newStr(80));
CHKN(descr = es_newStr(80));
const char* field_prefix = "%" DEFAULT_MATCHED_FIELD_NAME ":";
CHKR(es_addBuf(&descr, field_prefix, strlen(field_prefix)));
CHKR(es_addBuf(&descr, field_descr, field_descr_len));
CHKR(es_addChar(&descr, '%'));
es_size_t offset = 0;
CHKN(node = ln_v1_parseFieldDescr(ctx, descr, &offset, &tmp, &r));
if (offset != es_strlen(descr)) FAIL(LN_BADPARSERSTATE);
done:
if (r != 0) {
if (node != NULL) ln_deletePTreeNode(node);
node = NULL;
}
if (descr != NULL) es_deleteStr(descr);
if (tmp != NULL) es_deleteStr(tmp);
return node;
}
void* tokenized_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx) {
int r = LN_BADCONFIG;
char* name = es_str2cstr(node->name, NULL);
pcons_args_t *args = NULL;
tokenized_parser_data_t *pData = NULL;
const char *field_descr = NULL;
ln_fieldList_t* field = NULL;
const char *tok = NULL;
CHKN(args = pcons_args(node->raw_data, 2));
CHKN(pData = calloc(1, sizeof(tokenized_parser_data_t)));
pcons_unescape_arg(args, 0);
CHKN(tok = pcons_arg(args, 0, NULL));
CHKN(pData->tok_str = es_newStrFromCStr(tok, strlen(tok)));
es_unescapeStr(pData->tok_str);
CHKN(field_descr = pcons_arg(args, 1, NULL));
const int field_descr_len = strlen(field_descr);
pData->free_ctx = 1;
CHKN(field = parse_tokenized_content_field(ctx, field_descr, field_descr_len));
if (field->parser == ln_parseRecursive) {
pData->use_default_field = 0;
struct recursive_parser_data_s *dat = (struct recursive_parser_data_s*) field->parser_data;
if (dat != NULL) {
CHKN(pData->remaining_field = strdup(dat->remaining_field));
pData->free_ctx = dat->free_ctx;
pData->ctx = dat->ctx;
dat->free_ctx = 0;
}
} else {
pData->use_default_field = 1;
CHKN(pData->ctx = generate_context_with_field_as_prefix(ctx, field_descr, field_descr_len));
}
if (pData->remaining_field == NULL) CHKN(pData->remaining_field = strdup(DEFAULT_REMAINING_FIELD_NAME));
r = 0;
done:
if (r != 0) {
if (name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for tokenized-field name");
else if (args == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for argument-parsing for field: %s", name);
else if (pData == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for parser-data for field: %s", name);
else if (tok == NULL)
ln_dbgprintf(ctx, "token-separator not provided for field: %s", name);
else if (pData->tok_str == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for token-separator "
"for field: %s", name);
else if (field_descr == NULL)
ln_dbgprintf(ctx, "field-type not provided for field: %s", name);
else if (field == NULL)
ln_dbgprintf(ctx, "couldn't resolve single-token field-type for tokenized field: %s", name);
else if (pData->ctx == NULL)
ln_dbgprintf(ctx, "couldn't initialize normalizer-context for field: %s", name);
else if (pData->remaining_field == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for "
"remaining-field-name for field: %s", name);
if (pData) tokenized_parser_data_destructor((void**) &pData);
}
if (name != NULL) free(name);
if (field != NULL) ln_deletePTreeNode(field);
if (args) free_pcons_args(&args);
return pData;
}
#ifdef FEATURE_REGEXP
/**
* Parse string matched by provided posix extended regex.
*
* Please note that using regex field in most cases will be
* significantly slower than other field-types.
*/
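/* For illustration only: per the constructor below, a regex field takes the
 * expression plus optional consume-group and return-group numbers, e.g. a
 * declaration along the lines of %errno:regex:err=([0-9]+):0:1% (shown as an
 * assumption) consumes the whole match but returns only the first capture
 * group. Regex fields are honoured only when the context has
 * LN_CTXOPT_ALLOW_REGEX set (see -oallowRegex in lognormalizer).
 */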
struct regex_parser_data_s {
pcre *re;
int consume_group;
int return_group;
int max_groups;
};
PARSER(Regex)
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
unsigned int* ovector = NULL;
struct regex_parser_data_s *pData = (struct regex_parser_data_s*) node->parser_data;
if (pData != NULL) {
ovector = calloc(pData->max_groups, sizeof(unsigned int) * 3);
if (ovector == NULL) FAIL(LN_NOMEM);
int result = pcre_exec(pData->re, NULL, str, strLen, *offs, 0, (int*) ovector, pData->max_groups * 3);
if (result == 0) result = pData->max_groups;
if (result > pData->consume_group) {
/*please check 'man 3 pcreapi' for cryptic '2 * n' and '2 * n + 1' magic*/
if (ovector[2 * pData->consume_group] == *offs) {
*parsed = ovector[2 * pData->consume_group + 1] - ovector[2 * pData->consume_group];
if (pData->consume_group != pData->return_group) {
char* val = NULL;
if((val = strndup(str + ovector[2 * pData->return_group],
ovector[2 * pData->return_group + 1] -
ovector[2 * pData->return_group])) == NULL) {
free(ovector);
FAIL(LN_NOMEM);
}
*value = json_object_new_string(val);
free(val);
if (*value == NULL) {
free(ovector);
FAIL(LN_NOMEM);
}
}
}
}
free(ovector);
}
r = 0; /* success */
done:
return r;
}
static const char* regex_parser_configure_consume_and_return_group(pcons_args_t* args,
struct regex_parser_data_s *pData) {
const char* consume_group_parse_error = "couldn't parse consume-group number";
const char* return_group_parse_error = "couldn't parse return-group number";
char* tmp = NULL;
const char* consume_grp_str = NULL;
const char* return_grp_str = NULL;
if ((consume_grp_str = pcons_arg(args, 1, "0")) == NULL ||
strlen(consume_grp_str) == 0) return consume_group_parse_error;
if ((return_grp_str = pcons_arg(args, 2, consume_grp_str)) == NULL ||
strlen(return_grp_str) == 0) return return_group_parse_error;
errno = 0;
pData->consume_group = strtol(consume_grp_str, &tmp, 10);
if (errno != 0 || strlen(tmp) != 0) return consume_group_parse_error;
pData->return_group = strtol(return_grp_str, &tmp, 10);
if (errno != 0 || strlen(tmp) != 0) return return_group_parse_error;
return NULL;
}
void* regex_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx) {
int r = LN_BADCONFIG;
char* exp = NULL;
const char* grp_parse_err = NULL;
pcons_args_t* args = NULL;
char* name = NULL;
struct regex_parser_data_s *pData = NULL;
const char *unescaped_exp = NULL;
const char *error = NULL;
int erroffset = 0;
CHKN(name = es_str2cstr(node->name, NULL));
if (!(ctx->opts & LN_CTXOPT_ALLOW_REGEX)) FAIL(LN_BADCONFIG);
CHKN(pData = malloc(sizeof(struct regex_parser_data_s)));
pData->re = NULL;
CHKN(args = pcons_args(node->raw_data, 3));
pData->consume_group = pData->return_group = 0;
CHKN(unescaped_exp = pcons_arg(args, 0, NULL));
pcons_unescape_arg(args, 0);
CHKN(exp = pcons_arg_copy(args, 0, NULL));
if ((grp_parse_err = regex_parser_configure_consume_and_return_group(args, pData)) != NULL)
FAIL(LN_BADCONFIG);
CHKN(pData->re = pcre_compile(exp, 0, &error, &erroffset, NULL));
pData->max_groups = ((pData->consume_group > pData->return_group) ? pData->consume_group :
pData->return_group) + 1;
r = 0;
done:
if (r != 0) {
if (name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory regex-field name");
else if (! ctx->opts & LN_CTXOPT_ALLOW_REGEX)
ln_dbgprintf(ctx, "regex support is not enabled for: '%s' "
"(please check lognorm context initialization)", name);
else if (pData == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for parser-data for field: %s", name);
else if (args == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for argument-parsing for field: %s", name);
else if (unescaped_exp == NULL)
ln_dbgprintf(ctx, "regular-expression missing for field: '%s'", name);
else if (exp == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for regex-string for field: '%s'", name);
else if (grp_parse_err != NULL)
ln_dbgprintf(ctx, "%s for: '%s'", grp_parse_err, name);
else if (pData->re == NULL)
ln_dbgprintf(ctx, "couldn't compile regex(encountered error '%s' at char '%d' in pattern) "
"for regex-matched field: '%s'", error, erroffset, name);
regex_parser_data_destructor((void**)&pData);
}
if (exp != NULL) free(exp);
if (args != NULL) free_pcons_args(&args);
if (name != NULL) free(name);
return pData;
}
void regex_parser_data_destructor(void** dataPtr) {
if ((*dataPtr) != NULL) {
struct regex_parser_data_s *pData = (struct regex_parser_data_s*) *dataPtr;
if (pData->re != NULL) pcre_free(pData->re);
free(pData);
*dataPtr = NULL;
}
}
#endif
/**
* Interpret the matched value as a specific data type (base-10 or base-16
* integer, floating point, or boolean) after matching it with the given field-type.
*/
typedef enum interpret_type {
/* If you change this, be sure to update json_type_to_name() too */
it_b10int,
it_b16int,
it_floating_pt,
it_boolean
} interpret_type;
struct interpret_parser_data_s {
ln_ctx ctx;
enum interpret_type intrprt;
};
static json_object* interpret_as_int(json_object *value, int base) {
if (json_object_is_type(value, json_type_string)) {
return json_object_new_int64(strtol(json_object_get_string(value), NULL, base));
} else if (json_object_is_type(value, json_type_int)) {
return value;
} else {
return NULL;
}
}
static json_object* interpret_as_double(json_object *value) {
double val = json_object_get_double(value);
return json_object_new_double(val);
}
static json_object* interpret_as_boolean(json_object *value) {
json_bool val;
if (json_object_is_type(value, json_type_string)) {
const char* str = json_object_get_string(value);
val = (strcasecmp(str, "false") == 0 || strcasecmp(str, "no") == 0) ? 0 : 1;
} else {
val = json_object_get_boolean(value);
}
return json_object_new_boolean(val);
}
static int reinterpret_value(json_object **value, enum interpret_type to_type) {
switch(to_type) {
case it_b10int:
*value = interpret_as_int(*value, 10);
break;
case it_b16int:
*value = interpret_as_int(*value, 16);
break;
case it_floating_pt:
*value = interpret_as_double(*value);
break;
case it_boolean:
*value = interpret_as_boolean(*value);
break;
default:
return 0;
}
return 1;
}
PARSER(Interpret)
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
json_object *unparsed = NULL;
json_object *parsed_raw = NULL;
struct interpret_parser_data_s* pData = (struct interpret_parser_data_s*) node->parser_data;
if (pData != NULL) {
int remaining_len = strLen - *offs;
const char *remaining_str = str + *offs;
CHKN(parsed_raw = json_object_new_object());
ln_normalize(pData->ctx, remaining_str, remaining_len, &parsed_raw);
if (json_object_object_get_ex(parsed_raw, UNPARSED_DATA_KEY, NULL)) {
*parsed = 0;
} else {
json_object_object_get_ex(parsed_raw, DEFAULT_MATCHED_FIELD_NAME, value);
json_object_object_get_ex(parsed_raw, DEFAULT_REMAINING_FIELD_NAME, &unparsed);
if (reinterpret_value(value, pData->intrprt)) {
*parsed = strLen - *offs - json_object_get_string_len(unparsed);
}
}
json_object_put(parsed_raw);
}
r = 0; /* success */
done:
return r;
}
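/* For illustration only: per the constructor below, an interpret field takes
 * the target type and the field type used for matching, e.g. a declaration
 * along the lines of %port:interpret:int:number% (shown as an assumption)
 * matches a digit run and stores it as a JSON integer rather than a string.
 */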
void* interpret_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx) {
int r = LN_BADCONFIG;
char* name = NULL;
struct interpret_parser_data_s *pData = NULL;
pcons_args_t *args = NULL;
int bad_interpret = 0;
const char* type_str = NULL;
const char *field_type = NULL;
CHKN(name = es_str2cstr(node->name, NULL));
CHKN(pData = calloc(1, sizeof(struct interpret_parser_data_s)));
CHKN(args = pcons_args(node->raw_data, 2));
CHKN(type_str = pcons_arg(args, 0, NULL));
if (strcmp(type_str, "int") == 0 || strcmp(type_str, "base10int") == 0) {
pData->intrprt = it_b10int;
} else if (strcmp(type_str, "base16int") == 0) {
pData->intrprt = it_b16int;
} else if (strcmp(type_str, "float") == 0) {
pData->intrprt = it_floating_pt;
} else if (strcmp(type_str, "bool") == 0) {
pData->intrprt = it_boolean;
} else {
bad_interpret = 1;
FAIL(LN_BADCONFIG);
}
CHKN(field_type = pcons_arg(args, 1, NULL));
CHKN(pData->ctx = generate_context_with_field_as_prefix(ctx, field_type, strlen(field_type)));
r = 0;
done:
if (r != 0) {
if (name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for interpret-field name");
else if (pData == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for parser-data for field: %s", name);
else if (args == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for argument-parsing for field: %s", name);
else if (type_str == NULL)
ln_dbgprintf(ctx, "no type provided for interpretation of field: %s", name);
else if (bad_interpret != 0)
ln_dbgprintf(ctx, "interpretation to unknown type '%s' requested for field: %s",
type_str, name);
else if (field_type == NULL)
ln_dbgprintf(ctx, "field-type to actually match the content not provided for "
"field: %s", name);
else if (pData->ctx == NULL)
ln_dbgprintf(ctx, "couldn't instantiate the normalizer context for matching "
"field: %s", name);
interpret_parser_data_destructor((void**) &pData);
}
free(name);
free_pcons_args(&args);
return pData;
}
void interpret_parser_data_destructor(void** dataPtr) {
if (*dataPtr != NULL) {
struct interpret_parser_data_s *pData = (struct interpret_parser_data_s*) *dataPtr;
if (pData->ctx != NULL) ln_exitCtx(pData->ctx);
free(pData);
*dataPtr = NULL;
}
};
/**
* Parse suffixed char-sequence, where suffix is one of many possible suffixes.
*/
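/* For illustration only: per the constructor below, a suffixed field takes a
 * tokenizer for the suffix list, the suffix list itself and the field type
 * for the value part, e.g. a declaration along the lines of
 * %size:suffixed:,:b,kb,mb:number% (shown as an assumption) would match
 * "512kb" and emit separate "value" and "suffix" keys.
 */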
struct suffixed_parser_data_s {
int nsuffix;
int *suffix_offsets;
int *suffix_lengths;
char* suffixes_str;
ln_ctx ctx;
char* value_field_name;
char* suffix_field_name;
};
PARSER(Suffixed) {
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
json_object *unparsed = NULL;
json_object *parsed_raw = NULL;
json_object *parsed_value = NULL;
json_object *result = NULL;
json_object *suffix = NULL;
struct suffixed_parser_data_s *pData = (struct suffixed_parser_data_s*) node->parser_data;
if (pData != NULL) {
int remaining_len = strLen - *offs;
const char *remaining_str = str + *offs;
int i;
CHKN(parsed_raw = json_object_new_object());
ln_normalize(pData->ctx, remaining_str, remaining_len, &parsed_raw);
if (json_object_object_get_ex(parsed_raw, UNPARSED_DATA_KEY, NULL)) {
*parsed = 0;
} else {
json_object_object_get_ex(parsed_raw, DEFAULT_MATCHED_FIELD_NAME, &parsed_value);
json_object_object_get_ex(parsed_raw, DEFAULT_REMAINING_FIELD_NAME, &unparsed);
const char* unparsed_frag = json_object_get_string(unparsed);
for(i = 0; i < pData->nsuffix; i++) {
const char* possible_suffix = pData->suffixes_str + pData->suffix_offsets[i];
int len = pData->suffix_lengths[i];
if (strncmp(possible_suffix, unparsed_frag, len) == 0) {
CHKN(result = json_object_new_object());
CHKN(suffix = json_object_new_string(possible_suffix));
json_object_get(parsed_value);
json_object_object_add(result, pData->value_field_name, parsed_value);
json_object_object_add(result, pData->suffix_field_name, suffix);
*parsed = strLen - *offs - json_object_get_string_len(unparsed) + len;
break;
}
}
if (result != NULL) {
*value = result;
}
}
}
FAILParser
if (r != 0) {
if (result != NULL) json_object_put(result);
}
if (parsed_raw != NULL) json_object_put(parsed_raw);
} ENDFailParser
static struct suffixed_parser_data_s* _suffixed_parser_data_constructor(ln_fieldList_t *node,
ln_ctx ctx,
es_str_t* raw_args,
const char* value_field,
const char* suffix_field) {
int r = LN_BADCONFIG;
pcons_args_t* args = NULL;
char* name = NULL;
struct suffixed_parser_data_s *pData = NULL;
const char *escaped_tokenizer = NULL;
const char *uncopied_suffixes_str = NULL;
const char *tokenizer = NULL;
char *suffixes_str = NULL;
const char *field_type = NULL;
char *tok_saveptr = NULL;
char *tok_input = NULL;
int i = 0;
char *tok = NULL;
CHKN(name = es_str2cstr(node->name, NULL));
CHKN(pData = calloc(1, sizeof(struct suffixed_parser_data_s)));
if (value_field == NULL) value_field = "value";
if (suffix_field == NULL) suffix_field = "suffix";
pData->value_field_name = strdup(value_field);
pData->suffix_field_name = strdup(suffix_field);
CHKN(args = pcons_args(raw_args, 3));
CHKN(escaped_tokenizer = pcons_arg(args, 0, NULL));
pcons_unescape_arg(args, 0);
CHKN(tokenizer = pcons_arg(args, 0, NULL));
CHKN(uncopied_suffixes_str = pcons_arg(args, 1, NULL));
pcons_unescape_arg(args, 1);
CHKN(suffixes_str = pcons_arg_copy(args, 1, NULL));
tok_input = suffixes_str;
while (strtok_r(tok_input, tokenizer, &tok_saveptr) != NULL) {
tok_input = NULL;
pData->nsuffix++;
}
if (pData->nsuffix == 0) {
FAIL(LN_INVLDFDESCR);
}
CHKN(pData->suffix_offsets = calloc(pData->nsuffix, sizeof(int)));
CHKN(pData->suffix_lengths = calloc(pData->nsuffix, sizeof(int)));
CHKN(pData->suffixes_str = pcons_arg_copy(args, 1, NULL));
tok_input = pData->suffixes_str;
while ((tok = strtok_r(tok_input, tokenizer, &tok_saveptr)) != NULL) {
tok_input = NULL;
pData->suffix_offsets[i] = tok - pData->suffixes_str;
pData->suffix_lengths[i++] = strlen(tok);
}
CHKN(field_type = pcons_arg(args, 2, NULL));
CHKN(pData->ctx = generate_context_with_field_as_prefix(ctx, field_type, strlen(field_type)));
r = 0;
done:
if (r != 0) {
if (name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory suffixed-field name");
else if (pData == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for parser-data for field: %s", name);
else if (pData->value_field_name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for value-field's name for field: %s", name);
else if (pData->suffix_field_name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for suffix-field's name for field: %s", name);
else if (args == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for argument-parsing for field: %s", name);
else if (escaped_tokenizer == NULL)
ln_dbgprintf(ctx, "suffix token-string missing for field: '%s'", name);
else if (tokenizer == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for unescaping token-string for field: '%s'",
name);
else if (uncopied_suffixes_str == NULL)
ln_dbgprintf(ctx, "suffix-list missing for field: '%s'", name);
else if (suffixes_str == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for suffix-list for field: '%s'", name);
else if (pData->nsuffix == 0)
ln_dbgprintf(ctx, "could't read suffix-value(s) for field: '%s'", name);
else if (pData->suffix_offsets == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for suffix-list element references for field: "
"'%s'", name);
else if (pData->suffix_lengths == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for suffix-list element lengths for field: '%s'",
name);
else if (pData->suffixes_str == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for suffix-list for field: '%s'", name);
else if (field_type == NULL)
ln_dbgprintf(ctx, "field-type declaration missing for field: '%s'", name);
else if (pData->ctx == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for normalizer-context for field: '%s'", name);
suffixed_parser_data_destructor((void**)&pData);
}
free_pcons_args(&args);
if (suffixes_str != NULL) free(suffixes_str);
if (name != NULL) free(name);
return pData;
}
void* suffixed_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx) {
return _suffixed_parser_data_constructor(node, ctx, node->raw_data, NULL, NULL);
}
void* named_suffixed_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx) {
int r = LN_BADCONFIG;
pcons_args_t* args = NULL;
char* name = NULL;
const char* value_field_name = NULL;
const char* suffix_field_name = NULL;
const char* remaining_args = NULL;
es_str_t* unnamed_suffix_args = NULL;
struct suffixed_parser_data_s* pData = NULL;
CHKN(name = es_str2cstr(node->name, NULL));
CHKN(args = pcons_args(node->raw_data, 3));
CHKN(value_field_name = pcons_arg(args, 0, NULL));
CHKN(suffix_field_name = pcons_arg(args, 1, NULL));
CHKN(remaining_args = pcons_arg(args, 2, NULL));
CHKN(unnamed_suffix_args = es_newStrFromCStr(remaining_args, strlen(remaining_args)));
CHKN(pData = _suffixed_parser_data_constructor(node, ctx, unnamed_suffix_args, value_field_name,
suffix_field_name));
r = 0;
done:
if (r != 0) {
if (name == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory named_suffixed-field name");
else if (args == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for argument-parsing for field: %s", name);
else if (value_field_name == NULL)
ln_dbgprintf(ctx, "key-name for value not provided for field: %s", name);
else if (suffix_field_name == NULL)
ln_dbgprintf(ctx, "key-name for suffix not provided for field: %s", name);
else if (unnamed_suffix_args == NULL)
ln_dbgprintf(ctx, "couldn't allocate memory for unnamed-suffix-field args for field: %s",
name);
else if (pData == NULL)
ln_dbgprintf(ctx, "couldn't create parser-data for field: %s", name);
suffixed_parser_data_destructor((void**)&pData);
}
if (unnamed_suffix_args != NULL) free(unnamed_suffix_args);
if (args != NULL) free_pcons_args(&args);
if (name != NULL) free(name);
return pData;
}
void suffixed_parser_data_destructor(void** dataPtr) {
if ((*dataPtr) != NULL) {
struct suffixed_parser_data_s *pData = (struct suffixed_parser_data_s*) *dataPtr;
if (pData->suffixes_str != NULL) free(pData->suffixes_str);
if (pData->suffix_offsets != NULL) free(pData->suffix_offsets);
if (pData->suffix_lengths != NULL) free(pData->suffix_lengths);
if (pData->value_field_name != NULL) free(pData->value_field_name);
if (pData->suffix_field_name != NULL) free(pData->suffix_field_name);
if (pData->ctx != NULL) ln_exitCtx(pData->ctx);
free(pData);
*dataPtr = NULL;
}
}
/**
* Just get everything till the end of string.
*/
PARSER(Rest)
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
/* silence the warning about unused variable */
(void)str;
/* success, persist */
*parsed = strLen - *offs;
r = 0;
return r;
}
/**
* Parse a possibly quoted string. In this initial implementation, escaping of the quote
* char is not supported. A quoted string is one that starts with a double quote,
* has some text (not containing double quotes) and ends with the first double
* quote character seen. The extracted string does NOT include the quote characters.
* swisskid, 2015-01-21
*/
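/* Illustrative behaviour (hypothetical inputs, derived from the code below):
* given the text >"hello world" rest<, the value "hello world" is extracted
* and the terminating quote is consumed; given >token rest<, the unquoted
* word "token" is extracted up to the first space.
*/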
PARSER(OpQuotedString)
const char *c;
size_t i;
char *cstr = NULL;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
if(c[i] != '"') {
while(i < strLen && c[i] != ' ')
i++;
if(i == *offs)
goto done;
/* success, persist */
*parsed = i - *offs;
/* create JSON value to save quoted string contents */
CHKN(cstr = strndup((char*)c + *offs, *parsed));
} else {
++i;
/* search end of string */
while(i < strLen && c[i] != '"')
i++;
if(i == strLen || c[i] != '"')
goto done;
/* success, persist */
*parsed = i + 1 - *offs; /* "eat" terminal double quote */
/* create JSON value to save quoted string contents */
CHKN(cstr = strndup((char*)c + *offs + 1, *parsed - 2));
}
CHKN(*value = json_object_new_string(cstr));
r = 0; /* success */
done:
free(cstr);
return r;
}
/**
* Parse a quoted string. In this initial implementation, escaping of the quote
* char is not supported. A quoted string is one that starts with a double quote,
* has some text (not containing double quotes) and ends with the first double
* quote character seen. The extracted string does NOT include the quote characters.
* rgerhards, 2011-01-14
*/
PARSER(QuotedString)
const char *c;
size_t i;
char *cstr = NULL;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
if(i + 2 > strLen)
goto done; /* needs at least 2 characters */
if(c[i] != '"')
goto done;
++i;
/* search end of string */
while(i < strLen && c[i] != '"')
i++;
if(i == strLen || c[i] != '"')
goto done;
/* success, persist */
*parsed = i + 1 - *offs; /* "eat" terminal double quote */
/* create JSON value to save quoted string contents */
CHKN(cstr = strndup((char*)c + *offs + 1, *parsed - 2));
CHKN(*value = json_object_new_string(cstr));
r = 0; /* success */
done:
free(cstr);
return r;
}
/**
* Parse an ISO date, that is YYYY-MM-DD (exactly this format).
* Note: we do manual loop unrolling -- this is fast AND efficient.
* rgerhards, 2011-01-14
*/
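/* Illustrative examples (hypothetical inputs): "2018-11-25" matches, while
* "2018-13-01" (invalid month) and "2018-1-01" (month not zero-padded to two
* digits) are rejected by the checks below.
*/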
PARSER(ISODate)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
if(*offs+10 > strLen)
goto done; /* if it is not 10 chars, it can't be an ISO date */
/* year */
if(!isdigit(c[i])) goto done;
if(!isdigit(c[i+1])) goto done;
if(!isdigit(c[i+2])) goto done;
if(!isdigit(c[i+3])) goto done;
if(c[i+4] != '-') goto done;
/* month */
if(c[i+5] == '0') {
if(c[i+6] < '1' || c[i+6] > '9') goto done;
} else if(c[i+5] == '1') {
if(c[i+6] < '0' || c[i+6] > '2') goto done;
} else {
goto done;
}
if(c[i+7] != '-') goto done;
/* day */
if(c[i+8] == '0') {
if(c[i+9] < '1' || c[i+9] > '9') goto done;
} else if(c[i+8] == '1' || c[i+8] == '2') {
if(!isdigit(c[i+9])) goto done;
} else if(c[i+8] == '3') {
if(c[i+9] != '0' && c[i+9] != '1') goto done;
} else {
goto done;
}
/* success, persist */
*parsed = 10;
r = 0; /* success */
done:
return r;
}
/**
* Parse a Cisco interface spec. Sample for such a spec are:
* outside:192.168.52.102/50349
* inside:192.168.1.15/56543 (192.168.1.112/54543)
* outside:192.168.1.13/50179 (192.168.1.13/50179)(LOCAL\some.user)
* outside:192.168.1.25/41850(LOCAL\RG-867G8-DEL88D879BBFFC8)
* inside:192.168.1.25/53 (192.168.1.25/53) (some.user)
* 192.168.1.15/0(LOCAL\RG-867G8-DEL88D879BBFFC8)
* From this, we conclude the format is:
* [interface:]ip/port [SP (ip2/port2)] [[SP](username)]
* In order to match, this syntax must start on a non-whitespace char
* other than colon.
*/
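/* Note on extracted fields (derived from the code below): on a match the
* resulting JSON object may contain "interface", "ip" and "port" and,
* depending on the input, also "ip2", "port2" and "user".
*/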
PARSER(CiscoInterfaceSpec)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
if(c[i] == ':' || isspace(c[i])) goto done;
/* first, check if we have an interface. We do this by trying
* to detect if we have an IP. If we have, obviously no interface
* is present. Otherwise, we check if we have a valid interface.
*/
int bHaveInterface = 0;
size_t idxInterface = 0;
size_t lenInterface = 0;
int bHaveIP = 0;
size_t lenIP;
size_t idxIP = i;
if(ln_parseIPv4(str, strLen, &i, node, &lenIP, NULL) == 0) {
bHaveIP = 1;
i += lenIP - 1; /* position on delimiter */
} else {
idxInterface = i;
while(i < strLen) {
if(isspace(c[i])) goto done;
if(c[i] == ':')
break;
++i;
}
lenInterface = i - idxInterface;
bHaveInterface = 1;
}
if(i == strLen) goto done;
++i; /* skip over colon */
/* we now utilize our other parser helpers */
if(!bHaveIP) {
idxIP = i;
if(ln_parseIPv4(str, strLen, &i, node, &lenIP, NULL) != 0) goto done;
i += lenIP;
}
if(i == strLen || c[i] != '/') goto done;
++i; /* skip slash */
const size_t idxPort = i;
size_t lenPort;
if(ln_parseNumber(str, strLen, &i, node, &lenPort, NULL) != 0) goto done;
i += lenPort;
if(i == strLen) goto success;
/* check if optional second ip/port is present
* We assume we must at least have 5 chars [" (::1)"]
*/
int bHaveIP2 = 0;
size_t idxIP2 = 0, lenIP2 = 0;
size_t idxPort2 = 0, lenPort2 = 0;
if(i+5 < strLen && c[i] == ' ' && c[i+1] == '(') {
size_t iTmp = i+2; /* skip over " (" */
idxIP2 = iTmp;
if(ln_parseIPv4(str, strLen, &iTmp, node, &lenIP2, NULL) == 0) {
iTmp += lenIP2;
if(iTmp < strLen && c[iTmp] == '/') {
++iTmp; /* skip slash */
idxPort2 = iTmp;
if(ln_parseNumber(str, strLen, &iTmp, node, &lenPort2, NULL) == 0) {
iTmp += lenPort2;
if(iTmp < strLen && c[iTmp] == ')') {
i = iTmp + 1; /* match, so use new index */
bHaveIP2 = 1;
}
}
}
}
}
/* check if optional username is present
* We assume we must at least have 3 chars ["(n)"]
*/
int bHaveUser = 0;
size_t idxUser = 0;
size_t lenUser = 0;
if( (i+2 < strLen && c[i] == '(' && !isspace(c[i+1]) )
|| (i+3 < strLen && c[i] == ' ' && c[i+1] == '(' && !isspace(c[i+2])) ) {
idxUser = i + ((c[i] == ' ') ? 2 : 1); /* skip [SP]'(' */
size_t iTmp = idxUser;
while(iTmp < strLen && !isspace(c[iTmp]) && c[iTmp] != ')')
++iTmp; /* just scan */
if(iTmp < strLen && c[iTmp] == ')') {
i = iTmp + 1; /* we have a match, so use new index */
bHaveUser = 1;
lenUser = iTmp - idxUser;
}
}
/* all done, save data */
if(value == NULL)
goto success;
CHKN(*value = json_object_new_object());
json_object *json;
if(bHaveInterface) {
CHKN(json = json_object_new_string_len(c+idxInterface, lenInterface));
json_object_object_add_ex(*value, "interface", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
CHKN(json = json_object_new_string_len(c+idxIP, lenIP));
json_object_object_add_ex(*value, "ip", json, JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
CHKN(json = json_object_new_string_len(c+idxPort, lenPort));
json_object_object_add_ex(*value, "port", json, JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
if(bHaveIP2) {
CHKN(json = json_object_new_string_len(c+idxIP2, lenIP2));
json_object_object_add_ex(*value, "ip2", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
CHKN(json = json_object_new_string_len(c+idxPort2, lenPort2));
json_object_object_add_ex(*value, "port2", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
if(bHaveUser) {
CHKN(json = json_object_new_string_len(c+idxUser, lenUser));
json_object_object_add_ex(*value, "user", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
success: /* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL; /* to be on the safe side */
}
return r;
}
/**
* Parse a duration. A duration is similar to a timestamp, except that
* it tells about time elapsed. As such, hours can be larger than 23
* and hours may also be specified by a single digit (this, for example,
* is commonly done in Cisco software).
* Note: we do manual loop unrolling -- this is fast AND efficient.
*/
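/* Illustrative examples (hypothetical inputs): "3:05:42" and "37:05:42"
* match (single- and two-digit hours, hours may exceed 23), while
* "3:65:00" is rejected because the minute field is limited to 00..59
* by the checks below.
*/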
PARSER(Duration)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
/* hour is a bit tricky */
if(!isdigit(c[i])) goto done;
++i;
if(isdigit(c[i]))
++i;
if(c[i] == ':')
++i;
else
goto done;
if(i+5 > strLen)
goto done;/* if it is not 5 chars from here, it can't be us */
if(c[i] < '0' || c[i] > '5') goto done;
if(!isdigit(c[i+1])) goto done;
if(c[i+2] != ':') goto done;
if(c[i+3] < '0' || c[i+3] > '5') goto done;
if(!isdigit(c[i+4])) goto done;
/* success, persist */
*parsed = (i + 5) - *offs;
r = 0; /* success */
done:
return r;
}
/**
* Parse a timestamp in 24hr format (exactly HH:MM:SS).
* Note: we do manual loop unrolling -- this is fast AND efficient.
* rgerhards, 2011-01-14
*/
PARSER(Time24hr)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
if(*offs+8 > strLen)
goto done; /* if it is not 8 chars, it can't be us */
/* hour */
if(c[i] == '0' || c[i] == '1') {
if(!isdigit(c[i+1])) goto done;
} else if(c[i] == '2') {
if(c[i+1] < '0' || c[i+1] > '3') goto done;
} else {
goto done;
}
/* TODO: the code below is duplicated in the 12hr parser - create common function */
if(c[i+2] != ':') goto done;
if(c[i+3] < '0' || c[i+3] > '5') goto done;
if(!isdigit(c[i+4])) goto done;
if(c[i+5] != ':') goto done;
if(c[i+6] < '0' || c[i+6] > '5') goto done;
if(!isdigit(c[i+7])) goto done;
/* success, persist */
*parsed = 8;
r = 0; /* success */
done:
return r;
}
/**
* Parse a timestamp in 12hr format (exactly HH:MM:SS).
* Note: we do manual loop unrolling -- this is fast AND efficient.
* TODO: the code below is a duplicate of 24hr parser - create common function?
* rgerhards, 2011-01-14
*/
PARSER(Time12hr)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = str;
i = *offs;
if(*offs+8 > strLen)
goto done; /* if it is not 8 chars, it can't be us */
/* hour */
if(c[i] == '0') {
if(!isdigit(c[i+1])) goto done;
} else if(c[i] == '1') {
if(c[i+1] < '0' || c[i+1] > '2') goto done;
} else {
goto done;
}
if(c[i+2] != ':') goto done;
if(c[i+3] < '0' || c[i+3] > '5') goto done;
if(!isdigit(c[i+4])) goto done;
if(c[i+5] != ':') goto done;
if(c[i+6] < '0' || c[i+6] > '5') goto done;
if(!isdigit(c[i+7])) goto done;
/* success, persist */
*parsed = 8;
r = 0; /* success */
done:
return r;
}
/* helper to the IPv4 address parser, checks the next group of digits.
* Syntax: 1 to 3 digits, with a combined value not larger than 255.
* @param[in] str parse buffer
* @param[in/out] offs offset into buffer, updated if successful
* @return 0 if OK, 1 otherwise
*/
static int
chkIPv4AddrByte(const char *str, size_t strLen, size_t *offs)
{
int val = 0;
int r = 1; /* default: done -- simplifies things */
const char *c;
size_t i = *offs;
c = str;
if(i == strLen || !isdigit(c[i]))
goto done;
val = c[i++] - '0';
if(i < strLen && isdigit(c[i])) {
val = val * 10 + c[i++] - '0';
if(i < strLen && isdigit(c[i]))
val = val * 10 + c[i++] - '0';
}
if(val > 255) /* cannot be a valid IP address byte! */
goto done;
*offs = i;
r = 0;
done:
return r;
}
/**
* Parser for IPv4 addresses.
*/
PARSER(IPv4)
const char *c;
size_t i;
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
i = *offs;
if(i + 7 > strLen) {
/* IPv4 addr requires at least 7 characters */
goto done;
}
c = str;
/* byte 1*/
if(chkIPv4AddrByte(str, strLen, &i) != 0) goto done;
if(i == strLen || c[i++] != '.') goto done;
/* byte 2*/
if(chkIPv4AddrByte(str, strLen, &i) != 0) goto done;
if(i == strLen || c[i++] != '.') goto done;
/* byte 3*/
if(chkIPv4AddrByte(str, strLen, &i) != 0) goto done;
if(i == strLen || c[i++] != '.') goto done;
/* byte 4 - we do NOT need any char behind it! */
if(chkIPv4AddrByte(str, strLen, &i) != 0) goto done;
/* if we reach this point, we found a valid IP address */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/* skip past the IPv6 address block, parse pointer is set to
* first char after the block. Returns an error if already at end
* of string.
* @param[in] str parse buffer
* @param[in/out] offs offset into buffer, updated if successful
* @return 0 if OK, 1 otherwise
*/
static int
skipIPv6AddrBlock(const char *const __restrict__ str,
const size_t strLen,
size_t *const __restrict__ offs)
{
int j;
if(*offs == strLen)
return 1;
for(j = 0 ; j < 4 && *offs+j < strLen && isxdigit(str[*offs+j]) ; ++j)
/*just skip*/ ;
*offs += j;
return 0;
}
/**
* Parser for IPv6 addresses.
* Based on RFC4291 Section 2.2. The address must be followed
* by whitespace or end-of-string, else it is not considered
* a valid address. This prevents false positives.
*/
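/* Illustrative examples (hypothetical inputs): "::1" and "2001:db8::1"
* match when followed by whitespace or end-of-string; an address embedded
* in a longer token (e.g. "2001:db8::1/64") is not accepted by this parser.
*/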
PARSER(IPv6)
const char *c;
size_t i;
size_t beginBlock; /* last block begin in case we need IPv4 parsing */
int hasIPv4 = 0;
int nBlocks = 0; /* how many blocks did we already have? */
int bHad0Abbrev = 0; /* :: already used? */
assert(str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
i = *offs;
if(i + 2 > strLen) {
/* IPv6 addr requires at least 2 characters ("::") */
goto done;
}
c = str;
/* check that first block is non-empty */
if(! ( isxdigit(c[i]) || (c[i] == ':' && c[i+1] == ':') ) )
goto done;
/* try for all potential blocks plus one more (so we see errors!) */
for(int j = 0 ; j < 9 ; ++j) {
beginBlock = i;
if(skipIPv6AddrBlock(str, strLen, &i) != 0) goto done;
nBlocks++;
if(i == strLen) goto chk_ok;
if(isspace(c[i])) goto chk_ok;
if(c[i] == '.'){ /* IPv4 processing! */
hasIPv4 = 1;
break;
}
if(c[i] != ':') goto done;
i++; /* "eat" ':' */
if(i == strLen) goto chk_ok;
/* check for :: */
if(bHad0Abbrev) {
if(c[i] == ':') goto done;
} else {
if(c[i] == ':') {
bHad0Abbrev = 1;
++i;
if(i == strLen) goto chk_ok;
}
}
}
if(hasIPv4) {
size_t ipv4_parsed;
--nBlocks;
/* prevent a pure IPv4 address from being recognized */
if(beginBlock == *offs) goto done;
i = beginBlock;
if(ln_parseIPv4(str, strLen, &i, node, &ipv4_parsed, NULL) != 0)
goto done;
i += ipv4_parsed;
}
chk_ok: /* we are finished parsing, check if things are ok */
if(nBlocks > 8) goto done;
if(bHad0Abbrev && nBlocks >= 8) goto done;
/* now check if trailing block is missing. Note that i is already
* on next character, so we need to go two back. Two are always
* present, else we would not reach this code here.
*/
if(c[i-1] == ':' && c[i-2] != ':') goto done;
/* if we reach this point, we found a valid IP address */
*parsed = i - *offs;
r = 0; /* success */
done:
return r;
}
/* check if a char is valid inside a name of the iptables motif.
* We try to keep the set as slim as possible, because the iptables
* parser may otherwise create a very broad match (especially the
* inclusion of simple words like "DF" causes grief here).
* Note: we have taken the permitted set from iptables log samples.
* Report bugs if we missed some additional rules.
*/
static inline int
isValidIPTablesNameChar(const char c)
{
/* right now, upper case only is valid */
return ('A' <= c && c <= 'Z') ? 1 : 0;
}
/* helper to iptables parser, parses out a single name=value pair
*/
static int
parseIPTablesNameValue(const char *const __restrict__ str,
const size_t strLen,
size_t *const __restrict__ offs,
struct json_object *const __restrict__ valroot)
{
int r = LN_WRONGPARSER;
size_t i = *offs;
char *name = NULL;
const size_t iName = i;
while(i < strLen && isValidIPTablesNameChar(str[i]))
++i;
if(i == iName || (i < strLen && str[i] != '=' && str[i] != ' '))
goto done; /* no name at all! */
const ssize_t lenName = i - iName;
ssize_t iVal = -1;
size_t lenVal = i - iVal;
if(i < strLen && str[i] != ' ') {
/* we have a real value (not just a flag name like "DF") */
++i; /* skip '=' */
iVal = i;
while(i < strLen && !isspace(str[i]))
++i;
lenVal = i - iVal;
}
/* parsing OK */
*offs = i;
r = 0;
if(valroot == NULL)
goto done;
CHKN(name = malloc(lenName+1));
memcpy(name, str+iName, lenName);
name[lenName] = '\0';
json_object *json;
if(iVal == -1) {
json = NULL;
} else {
CHKN(json = json_object_new_string_len(str+iVal, lenVal));
}
json_object_object_add(valroot, name, json);
done:
free(name);
return r;
}
/**
* Parser for iptables logs (the structured part).
* This parser is named "v2-iptables" because of a traditional
* parser named "iptables", which we do not want to replace, at
* least right now (we may re-think this before the first release).
* For performance reasons, this works in two stages. In the first
* stage, we only detect if the motif is correct. The second stage is
* only called when we know it is. In it, we go over the message
* once again and actually extract the data. This is done because
* data extraction is relatively expensive and in most cases we will
* have much more frequent mismatches than matches.
* Note that this motif must have at least two fields, otherwise it
* could misdetect data that is not iptables output. Further limits
* may be imposed in the future as we see additional need.
* added 2015-04-30 rgerhards
*/
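/* Illustrative example (hypothetical log fragment): for
* "IN=eth0 OUT= SRC=192.0.2.1 DST=192.0.2.2 DF" each NAME=value pair
* becomes a JSON string member; a bare flag such as "DF" (no '=') is
* stored with a NULL JSON value by parseIPTablesNameValue() above.
*/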
PARSER(v2IPTables)
size_t i = *offs;
int nfields = 0;
/* stage one */
while(i < strLen) {
CHKR(parseIPTablesNameValue(str, strLen, &i, NULL));
++nfields;
/* exactly one SP is permitted between fields */
if(i < strLen && str[i] == ' ')
++i;
}
if(nfields < 2) {
FAIL(LN_WRONGPARSER);
}
/* success, persist */
*parsed = i - *offs;
r = 0;
/* stage two */
if(value == NULL)
goto done;
i = *offs;
CHKN(*value = json_object_new_object());
while(i < strLen) {
CHKR(parseIPTablesNameValue(str, strLen, &i, *value));
while(i < strLen && isspace(str[i]))
++i;
}
done:
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL;
}
return r;
}
/**
* Parse JSON. This parser tries to find JSON data inside a message.
* If it finds valid JSON, it will extract it. Extra data after the
* JSON is permitted.
* Note: the json-c JSON parser treats whitespace after the actual
* json as part of the json. So in essence, any whitespace is
* processed by this parser. We use the same semantics to keep things
* neatly in sync. If json-c changes for some reason or we switch to
* an alternate json lib, we probably need to be sure to keep that
* behaviour, and probably emulate it.
* added 2015-04-28 by rgerhards, v1.1.2
*/
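/* Illustrative example (hypothetical input): for the message
* '{"a": 1} trailing text' the parser consumes '{"a": 1}' (plus any
* whitespace json-c considers part of it) and leaves "trailing text"
* for subsequent parsers.
*/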
PARSER(JSON)
const size_t i = *offs;
struct json_tokener *tokener = NULL;
if(str[i] != '{' && str[i] != ']') {
/* this can't be json, see RFC4627, Sect. 2
* see this bug in json-c:
* https://github.com/json-c/json-c/issues/181
* In any case, it's better to do this quick check,
* even if json-c did not have the bug because this
* check here is much faster than calling the parser.
*/
goto done;
}
if((tokener = json_tokener_new()) == NULL)
goto done;
struct json_object *const json
= json_tokener_parse_ex(tokener, str+i, (int) (strLen - i));
if(json == NULL)
goto done;
/* success, persist */
*parsed = (i + tokener->char_offset) - *offs;
r = 0; /* success */
if(value == NULL) {
json_object_put(json);
} else {
*value = json;
}
done:
if(tokener != NULL)
json_tokener_free(tokener);
return r;
}
/* check if a char is valid inside a name of a NameValue list
* The set of valid characters may be extended if there is good
* need to do so. We have selected the current set carefully, but
* may have overlooked some cases.
*/
static inline int
isValidNameChar(const char c)
{
return (isalnum(c)
|| c == '.'
|| c == '_'
|| c == '-'
) ? 1 : 0;
}
/* helper to NameValue parser, parses out a single name=value pair
*
* name must be alphanumeric characters, value must be non-whitespace
* characters; if quoted, then with symmetric quotes. Supported formats:
* - name=value
* - name="value"
* - name='value'
* Note "name=" is valid and means a field with empty value.
* TODO: so far, quote characters are not permitted WITHIN quoted values.
*/
static int
parseNameValue(const char *const __restrict__ str,
const size_t strLen,
size_t *const __restrict__ offs,
struct json_object *const __restrict__ valroot)
{
int r = LN_WRONGPARSER;
size_t i = *offs;
char *name = NULL;
const size_t iName = i;
while(i < strLen && isValidNameChar(str[i]))
++i;
if(i == iName || str[i] != '=')
goto done; /* no name at all! */
const size_t lenName = i - iName;
++i; /* skip '=' */
const size_t iVal = i;
while(i < strLen && !isspace(str[i]))
++i;
const size_t lenVal = i - iVal;
/* parsing OK */
*offs = i;
r = 0;
if(valroot == NULL)
goto done;
CHKN(name = malloc(lenName+1));
memcpy(name, str+iName, lenName);
name[lenName] = '\0';
json_object *json;
CHKN(json = json_object_new_string_len(str+iVal, lenVal));
json_object_object_add(valroot, name, json);
done:
free(name);
return r;
}
/**
* Parse CEE syslog.
* This essentially is a JSON parser, with additional restrictions:
* The message must start with "@cee:" and json must immediately follow (whitespace permitted).
* After the JSON, there must be no other non-whitespace characters.
* In other words: the message must consist of a single JSON object,
* only.
* added 2015-04-28 by rgerhards, v1.1.2
*/
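/* Illustrative example (hypothetical input): the message
* '@cee: {"event":"login"}' matches and yields the contained JSON object;
* '@cee: {"event":"login"} extra' does not, because non-JSON text after
* the object is not permitted here.
*/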
PARSER(CEESyslog)
size_t i = *offs;
struct json_tokener *tokener = NULL;
struct json_object *json = NULL;
if(strLen < i + 7 || /* "@cee:{}" is minimum text */
str[i] != '@' ||
str[i+1] != 'c' ||
str[i+2] != 'e' ||
str[i+3] != 'e' ||
str[i+4] != ':')
goto done;
/* skip whitespace */
for(i += 5 ; i < strLen && isspace(str[i]) ; ++i)
/* just skip */;
if(i == strLen || str[i] != '{')
goto done;
/* note: we do not permit arrays in CEE mode */
if((tokener = json_tokener_new()) == NULL)
goto done;
json = json_tokener_parse_ex(tokener, str+i, (int) (strLen - i));
if(json == NULL)
goto done;
if(i + tokener->char_offset != strLen)
goto done;
/* success, persist */
*parsed = strLen;
r = 0; /* success */
if(value != NULL) {
*value = json;
json = NULL; /* do NOT free below! */
}
done:
if(tokener != NULL)
json_tokener_free(tokener);
if(json != NULL)
json_object_put(json);
return r;
}
/**
* Parser for name/value pairs.
* On entry must point to alnum char. All following chars must be
* name/value pairs delimited by whitespace up until the end of string.
* For performance reasons, this works in two stages. In the first
* stage, we only detect if the motif is correct. The second stage is
* only called when we know it is. In it, we go over the message
* once again and actually extract the data. This is done because
* data extraction is relatively expensive and in most cases we will
* have much more frequent mismatches than matches.
* added 2015-04-25 rgerhards
*/
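/* Illustrative example (hypothetical input): the message "a=1 b=2"
* yields the JSON object {"a":"1","b":"2"}; all values are extracted
* as JSON strings by parseNameValue() above.
*/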
PARSER(NameValue)
size_t i = *offs;
/* stage one */
while(i < strLen) {
CHKR(parseNameValue(str, strLen, &i, NULL));
while(i < strLen && isspace(str[i]))
++i;
}
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
/* stage two */
if(value == NULL)
goto done;
i = *offs;
CHKN(*value = json_object_new_object());
while(i < strLen) {
CHKR(parseNameValue(str, strLen, &i, *value));
while(i < strLen && isspace(str[i]))
++i;
}
/* TODO: fix mem leak if alloc json fails */
done:
return r;
}
/**
* Parse a MAC layer address.
* The standard (IEEE 802) format for printing MAC-48 addresses in
* human-friendly form is six groups of two hexadecimal digits,
* separated by hyphens (-) or colons (:), in transmission order
* (e.g. 01-23-45-67-89-ab or 01:23:45:67:89:ab ).
* This form is also commonly used for EUI-64.
* from: http://en.wikipedia.org/wiki/MAC_address
*
* This parser must start on a hex digit.
* added 2015-05-04 by rgerhards, v1.1.2
*/
PARSER(MAC48)
size_t i = *offs;
char delim;
if(strLen < i + 17 || /* this motif has exactly 17 characters */
!isxdigit(str[i]) ||
!isxdigit(str[i+1])
)
FAIL(LN_WRONGPARSER);
if(str[i+2] == ':')
delim = ':';
else if(str[i+2] == '-')
delim = '-';
else
FAIL(LN_WRONGPARSER);
/* first byte ok */
if(!isxdigit(str[i+3]) ||
!isxdigit(str[i+4]) ||
str[i+5] != delim || /* 2nd byte ok */
!isxdigit(str[i+6]) ||
!isxdigit(str[i+7]) ||
str[i+8] != delim || /* 3rd byte ok */
!isxdigit(str[i+9]) ||
!isxdigit(str[i+10]) ||
str[i+11] != delim || /* 4th byte ok */
!isxdigit(str[i+12]) ||
!isxdigit(str[i+13]) ||
str[i+14] != delim || /* 5th byte ok */
!isxdigit(str[i+15]) ||
!isxdigit(str[i+16]) /* 6th byte ok */
)
FAIL(LN_WRONGPARSER);
/* success, persist */
*parsed = 17;
r = 0; /* success */
if(value != NULL) {
CHKN(*value = json_object_new_string_len(str+i, 17));
}
done:
return r;
}
/* This parses the extension value and updates the index
* to point to the end of it.
*/
static int
cefParseExtensionValue(const char *const __restrict__ str,
const size_t strLen,
size_t *__restrict__ iEndVal)
{
int r = 0;
size_t i = *iEndVal;
size_t iLastWordBegin;
/* first find next unquoted equal sign and record begin of
* last word in front of it - this is the actual end of the
* current name/value pair and the begin of the next one.
*/
int hadSP = 0;
int inEscape = 0;
for(iLastWordBegin = 0 ; i < strLen ; ++i) {
if(inEscape) {
if(str[i] != '=' &&
str[i] != '\\' &&
str[i] != 'r' &&
str[i] != 'n')
FAIL(LN_WRONGPARSER);
inEscape = 0;
} else {
if(str[i] == '=') {
break;
} else if(str[i] == '\\') {
inEscape = 1;
} else if(str[i] == ' ') {
hadSP = 1;
} else {
if(hadSP) {
iLastWordBegin = i;
hadSP = 0;
}
}
}
}
/* Note: iLastWordBegin can never be at offset zero, because
* the CEF header starts there!
*/
if(i < strLen) {
*iEndVal = (iLastWordBegin == 0) ? i : iLastWordBegin - 1;
} else {
*iEndVal = i;
}
done:
return r;
}
/* must be positioned on first char of name, returns index
* of end of name.
* Note: ArcSight violates the CEF spec itself: they generate
* leading underscores in their extension names, which are
* definitely not alphanumeric. We still accept them...
* They also seem to use dots.
*/
static int
cefParseName(const char *const __restrict__ str,
const size_t strLen,
size_t *const __restrict__ i)
{
int r = 0;
while(*i < strLen && str[*i] != '=') {
if(!(isalnum(str[*i]) || str[*i] == '_' || str[*i] == '.'))
FAIL(LN_WRONGPARSER);
++(*i);
}
done:
return r;
}
/* parse CEF extensions. They are basically name=value
* pairs with the ugly exception that values may contain
* spaces but need NOT be quoted. Thankfully, at least
* names are specified as being alphanumeric without spaces
* in them. So we must add a lookahead parser to check if
* a word is a name (and thus the begin of a new pair) or
* not. This is done by subroutines.
*/
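/* Illustrative example (hypothetical extension string): in
* "act=blocked by policy src=10.0.0.1" the value of "act" is
* "blocked by policy", because the lookahead in cefParseExtensionValue()
* treats the last word before the next unquoted '=' ("src") as the
* start of the following name.
*/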
static int
cefParseExtensions(const char *const __restrict__ str,
const size_t strLen,
size_t *const __restrict__ offs,
json_object *const __restrict__ jroot)
{
int r = 0;
size_t i = *offs;
size_t iName, lenName;
size_t iValue, lenValue;
char *name = NULL;
char *value = NULL;
while(i < strLen) {
while(i < strLen && str[i] == ' ')
++i;
iName = i;
CHKR(cefParseName(str, strLen, &i));
if(i+1 >= strLen || str[i] != '=')
FAIL(LN_WRONGPARSER);
lenName = i - iName;
++i; /* skip '=' */
iValue = i;
CHKR(cefParseExtensionValue(str, strLen, &i));
lenValue = i - iValue;
++i; /* skip past value */
if(jroot != NULL) {
CHKN(name = malloc(sizeof(char) * (lenName + 1)));
memcpy(name, str+iName, lenName);
name[lenName] = '\0';
CHKN(value = malloc(sizeof(char) * (lenValue + 1)));
/* copy value but escape it */
size_t iDst = 0;
for(size_t iSrc = 0 ; iSrc < lenValue ; ++iSrc) {
if(str[iValue+iSrc] == '\\') {
++iSrc; /* we know the next char must exist! */
switch(str[iValue+iSrc]) {
case '=': value[iDst] = '=';
break;
case 'n': value[iDst] = '\n';
break;
case 'r': value[iDst] = '\r';
break;
case '\\': value[iDst] = '\\';
break;
default: break;
}
} else {
value[iDst] = str[iValue+iSrc];
}
++iDst;
}
value[iDst] = '\0';
json_object *json;
CHKN(json = json_object_new_string(value));
json_object_object_add(jroot, name, json);
free(name); name = NULL;
free(value); value = NULL;
}
}
done:
free(name);
free(value);
return r;
}
/* gets a CEF header field. Must be positioned on the
* first char after the '|' in front of field.
* Note that '|' may be escaped as "\|", which also means
* we need to support "\\" (see CEF spec for details).
* We return the string in *val, if val is non-null. In
* that case we allocate memory that the caller must free.
* This is necessary because there are potentially escape
* sequences inside the string.
*/
static int
cefGetHdrField(const char *const __restrict__ str,
const size_t strLen,
size_t *const __restrict__ offs,
char **val)
{
int r = 0;
size_t i = *offs;
assert(str[i] != '|');
while(i < strLen && str[i] != '|') {
if(str[i] == '\\') {
++i; /* skip esc char */
if(str[i] != '\\' && str[i] != '|')
FAIL(LN_WRONGPARSER);
}
++i; /* scan to next delimiter */
}
if(str[i] != '|')
FAIL(LN_WRONGPARSER);
const size_t iBegin = *offs;
/* success, persist */
*offs = i + 1;
if(val == NULL) {
r = 0;
goto done;
}
const size_t len = i - iBegin;
CHKN(*val = malloc(len + 1));
size_t iDst = 0;
for(size_t iSrc = 0 ; iSrc < len ; ++iSrc) {
if(str[iBegin+iSrc] == '\\')
++iSrc; /* we already checked above that this is OK! */
(*val)[iDst++] = str[iBegin+iSrc];
}
(*val)[iDst] = 0;
r = 0;
done:
return r;
}
/**
* Parser for ArcSight Common Event Format (CEF) version 0.
* added 2015-05-05 by rgerhards, v1.1.2
*/
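/* Illustrative example (hypothetical message): for
* "CEF:0|Vendor|Product|1.0|42|Test Event|5|src=192.0.2.1" the header
* fields are stored under "DeviceVendor", "DeviceProduct", "DeviceVersion",
* "SignatureID", "Name" and "Severity", and the name=value extensions are
* collected into the "Extensions" sub-object.
*/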
PARSER(CEF)
size_t i = *offs;
char *vendor = NULL;
char *product = NULL;
char *version = NULL;
char *sigID = NULL;
char *name = NULL;
char *severity = NULL;
/* minimum header: "CEF:0|x|x|x|x|x|x|" --> 17 chars */
if(strLen < i + 17 ||
str[i] != 'C' ||
str[i+1] != 'E' ||
str[i+2] != 'F' ||
str[i+3] != ':' ||
str[i+4] != '0' ||
str[i+5] != '|'
) FAIL(LN_WRONGPARSER);
i += 6; /* position on '|' */
CHKR(cefGetHdrField(str, strLen, &i, (value == NULL) ? NULL : &vendor));
CHKR(cefGetHdrField(str, strLen, &i, (value == NULL) ? NULL : &product));
CHKR(cefGetHdrField(str, strLen, &i, (value == NULL) ? NULL : &version));
CHKR(cefGetHdrField(str, strLen, &i, (value == NULL) ? NULL : &sigID));
CHKR(cefGetHdrField(str, strLen, &i, (value == NULL) ? NULL : &name));
CHKR(cefGetHdrField(str, strLen, &i, (value == NULL) ? NULL : &severity));
++i; /* skip over terminal '|' */
/* OK, we now know we have a good header. Now, we need
* to process extensions.
* This time, we do NOT pre-process the extensions, but rather
* persist them directly to JSON. This is contrary to other
* parsers, but as the CEF header is pretty unique, this time
* it is extremely unlikely we will get a no-match during
* extension processing. Even if so, nothing bad happens, as
* the extracted data is discarded. But the regular case saves
* us processing time and complexity. The only time when we
* cannot directly process it is when the caller asks us not
* to persist the data. So this must be handled differently.
*/
size_t iBeginExtensions = i;
CHKR(cefParseExtensions(str, strLen, &i, NULL));
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
if(value != NULL) {
CHKN(*value = json_object_new_object());
json_object *json;
CHKN(json = json_object_new_string(vendor));
json_object_object_add(*value, "DeviceVendor", json);
CHKN(json = json_object_new_string(product));
json_object_object_add(*value, "DeviceProduct", json);
CHKN(json = json_object_new_string(version));
json_object_object_add(*value, "DeviceVersion", json);
CHKN(json = json_object_new_string(sigID));
json_object_object_add(*value, "SignatureID", json);
CHKN(json = json_object_new_string(name));
json_object_object_add(*value, "Name", json);
CHKN(json = json_object_new_string(severity));
json_object_object_add(*value, "Severity", json);
json_object *jext;
CHKN(jext = json_object_new_object());
json_object_object_add(*value, "Extensions", jext);
i = iBeginExtensions;
cefParseExtensions(str, strLen, &i, jext);
}
done:
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL;
}
free(vendor);
free(product);
free(version);
free(sigID);
free(name);
free(severity);
return r;
}
/**
* Parser for Checkpoint LEA on-disk format.
* added 2015-06-18 by rgerhards, v1.1.2
*/
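/* Illustrative example (hypothetical record): for
* "time: 18Jun2018 10:00:00; action: accept;" each "name: value;"
* pair becomes a JSON string member, i.e.
* {"time":"18Jun2018 10:00:00","action":"accept"}.
*/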
PARSER(CheckpointLEA)
size_t i = *offs;
size_t iName, lenName;
size_t iValue, lenValue;
int foundFields = 0;
char *name = NULL;
char *val = NULL;
while(i < strLen) {
while(i < strLen && str[i] == ' ') /* skip leading SP */
++i;
if(i == strLen) { /* OK if just trailing space */
if(foundFields == 0)
FAIL(LN_WRONGPARSER);
break; /* we are done with the loop, all processed */
} else {
++foundFields;
}
iName = i;
/* TODO: do a stricter check? ... but we don't have a spec */
while(i < strLen && str[i] != ':') {
++i;
}
if(i+1 >= strLen || str[i] != ':')
FAIL(LN_WRONGPARSER);
lenName = i - iName;
++i; /* skip ':' */
while(i < strLen && str[i] == ' ') /* skip leading SP */
++i;
iValue = i;
while(i < strLen && str[i] != ';') {
++i;
}
if(i+1 > strLen || str[i] != ';')
FAIL(LN_WRONGPARSER);
lenValue = i - iValue;
++i; /* skip ';' */
if(value != NULL) {
CHKN(name = malloc(sizeof(char) * (lenName + 1)));
memcpy(name, str+iName, lenName);
name[lenName] = '\0';
CHKN(val = malloc(sizeof(char) * (lenValue + 1)));
memcpy(val, str+iValue, lenValue);
val[lenValue] = '\0';
if(*value == NULL)
CHKN(*value = json_object_new_object());
json_object *json;
CHKN(json = json_object_new_string(val));
json_object_object_add(*value, name, json);
free(name); name = NULL;
free(val); val = NULL;
}
}
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
free(name);
free(val);
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL;
}
return r;
}
liblognorm-2.0.6/src/annot.h 0000644 0001750 0001750 00000011524 13273030617 012670 0000000 0000000 /**
* @file annot.h
* @brief The annotation set object
* @class ln_annot annot.h
*//*
* Copyright 2011 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is meant to be included by applications using liblognorm.
* For lognorm library files themselves, include "lognorm.h".
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_ANNOT_H_INCLUDED
#define LIBLOGNORM_ANNOT_H_INCLUDED
#include
typedef struct ln_annotSet_s ln_annotSet;
typedef struct ln_annot_s ln_annot;
typedef struct ln_annot_op_s ln_annot_op;
typedef enum {ln_annot_ADD=0, ln_annot_RM=1} ln_annot_opcode;
/**
* List of annotation operations.
*/
struct ln_annot_op_s {
ln_annot_op *next;
ln_annot_opcode opc; /**< opcode */
es_str_t *name;
es_str_t *value;
};
/**
* annotation object
*/
struct ln_annot_s {
ln_annot *next; /**< used for chaining annotations */
es_str_t *tag; /**< tag associated for this annotation */
ln_annot_op *oproot;
};
/**
* annotation set object
*
* Note: we do not (yet) use a hash table. However, performance should
* be gained by pre-processing rules so that tags directly point into
* the annotation. This is even faster than hash table access.
*/
struct ln_annotSet_s {
ln_annot *aroot;
ln_ctx ctx; /**< save our context for easy dbgprintf et al... */
};
/* Methods */
/**
* Allocates and initializes a new annotation set.
* @memberof ln_annot
*
* @param[in] ctx current library context. This MUST match the
* context of the parent.
*
* @return pointer to new node or NULL on error
*/
ln_annotSet* ln_newAnnotSet(ln_ctx ctx);
/**
* Free annotation set and destruct all members.
* @memberof ln_annot
*
* @param[in] tree pointer to annot to free
*/
void ln_deleteAnnotSet(ln_annotSet *as);
/**
* Find annotation inside set based on given tag name.
* @memberof ln_annot
*
* @param[in] as annotation set
* @param[in] tag tag name to look for
*
* @returns NULL if not found, ptr to object otherwise
*/
ln_annot* ln_findAnnot(ln_annotSet *as, es_str_t *tag);
/**
* Add annotation to set.
* If an annotation associated with this tag already exists, these
* are combined. If not, a new annotation is added. Note that the
* caller must not access any of the objects passed in to this method
* after it has finished (objects may become deallocated during the
* method).
* @memberof ln_annot
*
* @param[in] as annotation set
* @param[in] annot annotation to add
*
* @returns 0 on success, something else otherwise
*/
int ln_addAnnotToSet(ln_annotSet *as, ln_annot *annot);
/**
* Allocates and initializes a new annotation.
* The tag passed in identifies the new annotation. The caller
* no longer owns the tag string after calling this method, so
* it must not access the same copy when the method returns.
* @memberof ln_annot
*
* @param[in] tag tag associated to annot (must not be NULL)
* @return pointer to new node or NULL on error
*/
ln_annot* ln_newAnnot(es_str_t *tag);
/**
* Free annotation and destruct all members.
* @memberof ln_annot
*
* @param[in] tree pointer to annot to free
*/
void ln_deleteAnnot(ln_annot *annot);
/**
* Add an operation to the annotation set.
* The operation description will be added as entry.
* @memberof ln_annot
*
* @param[in] annot pointer to annot to modify
* @param[in] op operation
* @param[in] name name of field, must NOT be re-used by caller
* @param[in] value value of field, may be NULL (e.g. in remove operation),
* must NOT be re-used by caller
* @returns 0 on success, something else otherwise
*/
int ln_addAnnotOp(ln_annot *anot, ln_annot_opcode opc, es_str_t *name, es_str_t *value);
/**
* Annotate an event.
* This adds annotations based on the event's tagbucket.
* @memberof ln_annot
*
* @param[in] ctx current context
* @param[in] event event to annotate (updated with annotations on exit)
* @returns 0 on success, something else otherwise
*/
int ln_annotate(ln_ctx ctx, struct json_object *json, struct json_object *tags);
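/* Minimal usage sketch (hypothetical variable names; tag, name and value
* are assumed to be es_str_t objects the caller has already created):
*
*   ln_annotSet *as = ln_newAnnotSet(ctx);
*   ln_annot *annot = ln_newAnnot(tag);
*   ln_addAnnotOp(annot, ln_annot_ADD, name, value);
*   ln_addAnnotToSet(as, annot);
*   ...
*   ln_deleteAnnotSet(as);
*/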
#endif /* #ifndef LOGNORM_ANNOT_H_INCLUDED */
liblognorm-2.0.6/src/annot.c 0000644 0001750 0001750 00000012310 13273030617 012655 0000000 0000000 /**
* @file annot.c
* @brief Implementation of the annotation set object.
* @class ln_annot annot.h
*//*
* Copyright 2011 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include
#include
#include
#include
#include
#include
#include
#include "lognorm.h"
#include "samp.h"
#include "annot.h"
#include "internal.h"
ln_annotSet*
ln_newAnnotSet(ln_ctx ctx)
{
ln_annotSet *as;
if((as = calloc(1, sizeof(struct ln_annotSet_s))) == NULL)
goto done;
as->ctx = ctx;
done: return as;
}
void
ln_deleteAnnotSet(ln_annotSet *as)
{
ln_annot *node, *nextnode;
if(as == NULL)
goto done;
for(node = as->aroot; node != NULL; node = nextnode) {
nextnode = node->next;
ln_deleteAnnot(node);
}
free(as);
done: return;
}
ln_annot*
ln_findAnnot(ln_annotSet *as, es_str_t *tag)
{
ln_annot *annot;
if(as == NULL) {
annot = NULL;
goto done;
}
for( annot = as->aroot
; annot != NULL && es_strcmp(annot->tag, tag)
; annot = annot->next) {
; /* do nothing, just search... */
}
done: return annot;
}
/**
* Combine two annotations.
* @param[in] annot currently existing and surviving annotation
* @param[in] add annotation to be added. This will be destructed
* as part of the process.
* @returns 0 if ok, something else otherwise
*/
static int
ln_combineAnnot(ln_annot *annot, ln_annot *add)
{
int r = 0;
ln_annot_op *op, *nextop;
for(op = add->oproot; op != NULL; op = nextop) {
CHKR(ln_addAnnotOp(annot, op->opc, op->name, op->value));
nextop = op->next;
free(op);
}
es_deleteStr(add->tag);
free(add);
done: return r;
}
int
ln_addAnnotToSet(ln_annotSet *as, ln_annot *annot)
{
int r = 0;
ln_annot *aexist;
assert(annot->tag != NULL);
aexist = ln_findAnnot(as, annot->tag);
if(aexist == NULL) {
/* does not yet exist, simply store new annot */
annot->next = as->aroot;
as->aroot = annot;
} else { /* annotation already exists, combine */
r = ln_combineAnnot(aexist, annot);
}
return r;
}
ln_annot*
ln_newAnnot(es_str_t *tag)
{
ln_annot *annot;
if((annot = calloc(1, sizeof(struct ln_annot_s))) == NULL)
goto done;
annot->tag = tag;
done: return annot;
}
void
ln_deleteAnnot(ln_annot *annot)
{
ln_annot_op *op, *nextop;
if(annot == NULL)
goto done;
es_deleteStr(annot->tag);
for(op = annot->oproot; op != NULL; op = nextop) {
es_deleteStr(op->name);
if(op->value != NULL)
es_deleteStr(op->value);
nextop = op->next;
free(op);
}
free(annot);
done: return;
}
int
ln_addAnnotOp(ln_annot *annot, ln_annot_opcode opc, es_str_t *name, es_str_t *value)
{
int r = -1;
ln_annot_op *node;
if((node = calloc(1, sizeof(struct ln_annot_op_s))) == NULL)
goto done;
node->opc = opc;
node->name = name;
node->value = value;
if(annot->oproot != NULL) {
node->next = annot->oproot;
}
annot->oproot = node;
r = 0;
done: return r;
}
/* annotate the event with a specific tag. helper to keep code
* small and easy to follow.
*/
static inline int
ln_annotateEventWithTag(ln_ctx ctx, struct json_object *json, es_str_t *tag)
{
int r=0;
ln_annot *annot;
ln_annot_op *op;
struct json_object *field;
char *cstr;
if (NULL == (annot = ln_findAnnot(ctx->pas, tag)))
goto done;
for(op = annot->oproot ; op != NULL ; op = op->next) {
if(op->opc == ln_annot_ADD) {
CHKN(cstr = ln_es_str2cstr(&op->value));
CHKN(field = json_object_new_string(cstr));
CHKN(cstr = ln_es_str2cstr(&op->name));
json_object_object_add(json, cstr, field);
} else {
// TODO: implement
}
}
done: return r;
}
int
ln_annotate(ln_ctx ctx, struct json_object *json, struct json_object *tagbucket)
{
int r = 0;
es_str_t *tag;
struct json_object *tagObj;
const char *tagCstr;
int i;
ln_dbgprintf(ctx, "ln_annotate called [aroot=%p]", ctx->pas->aroot);
/* shortcut: terminate immediately if nothing to do... */
if(ctx->pas->aroot == NULL)
goto done;
/* iterate over tagbucket */
for (i = json_object_array_length(tagbucket) - 1; i >= 0; i--) {
CHKN(tagObj = json_object_array_get_idx(tagbucket, i));
CHKN(tagCstr = json_object_get_string(tagObj));
ln_dbgprintf(ctx, "ln_annotate, current tag %d, cstr %s", i, tagCstr);
CHKN(tag = es_newStrFromCStr(tagCstr, strlen(tagCstr)));
CHKR(ln_annotateEventWithTag(ctx, json, tag));
es_deleteStr(tag);
}
done: return r;
}
liblognorm-2.0.6/src/samp.h 0000644 0001750 0001750 00000003540 13273030617 012510 0000000 0000000 /**
* @file samples.h
* @brief Object to process log samples.
* @author Rainer Gerhards
*
* This object handles log samples, as contained in actual log sample files.
* It co-operates with the ptree object to build the actual parser tree.
*//*
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2015 by Rainer Gerhards and Adiscon GmbH.
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_SAMPLES_H_INCLUDED
#define LIBLOGNORM_SAMPLES_H_INCLUDED
#include /* we need es_size_t */
#include
/**
* A single log sample.
*/
struct ln_samp {
es_str_t *msg;
};
void ln_sampFree(ln_ctx ctx, struct ln_samp *samp);
int ln_sampLoad(ln_ctx ctx, const char *file);
int ln_sampLoadFromString(ln_ctx ctx, const char *string);
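/* Minimal usage sketch (hypothetical file name): rule samples can be loaded
* either from a file, e.g.
*   ln_sampLoad(ctx, "myrules.rulebase");
* or directly from an in-memory string via ln_sampLoadFromString(ctx, rules).
*/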
/* dual-use functions for v1 engine */
void ln_sampSkipCommentLine(ln_ctx ctx, FILE * const __restrict__ repo, const char **inpbuf);
int ln_sampChkRunawayRule(ln_ctx ctx, FILE *const __restrict__ repo, const char **inpbuf);
#endif /* #ifndef LIBLOGNORM_SAMPLES_H_INCLUDED */
liblognorm-2.0.6/src/lognorm-features.h 0000644 0001750 0001750 00000000057 13370251162 015037 0000000 0000000 #if 0
#define LOGNORM_REGEX_SUPPORTED 1
#endif
liblognorm-2.0.6/src/pdag.c 0000644 0001750 0001750 00000144210 13370250152 012452 0000000 0000000 /**
* @file pdag.c
* @brief Implementation of the parse dag object.
* @class ln_pdag pdag.h
*//*
* Copyright 2015 by Rainer Gerhards and Adiscon GmbH.
*
* Released under ASL 2.0.
*/
#include "config.h"
#include
#include
#include
#include
#include
#include
#include
#include "liblognorm.h"
#include "v1_liblognorm.h"
#include "v1_ptree.h"
#include "lognorm.h"
#include "samp.h"
#include "pdag.h"
#include "annot.h"
#include "internal.h"
#include "parser.h"
#include "helpers.h"
void ln_displayPDAGComponentAlternative(struct ln_pdag *dag, int level);
void ln_displayPDAGComponent(struct ln_pdag *dag, int level);
#ifdef ADVANCED_STATS
uint64_t advstats_parsers_called = 0;
uint64_t advstats_parsers_success = 0;
int advstats_max_pathlen = 0;
int advstats_pathlens[ADVSTATS_MAX_ENTITIES];
int advstats_max_backtracked = 0;
int advstats_backtracks[ADVSTATS_MAX_ENTITIES];
int advstats_max_parser_calls = 0;
int advstats_parser_calls[ADVSTATS_MAX_ENTITIES];
int advstats_max_lit_parser_calls = 0;
int advstats_lit_parser_calls[ADVSTATS_MAX_ENTITIES];
#endif
/* parser lookup table
* This is a memory- and cache-optimized way of calling parsers.
* VERY IMPORTANT: the initialization must be done EXACTLY in the
* order of parser IDs (also see comment in pdag.h).
*
* Rough guideline for assigning priorities:
* 0 is highest, 255 lowest. 255 should be reserved for things that
* *really* should only be run as last resort --> rest. Also keep in
* mind that the user-assigned priority is put in the upper 24 bits, so
* parser-specific priorities only count when the user has assigned
* no priorities (which is expected to be common) or user-assigned
* priorities are equal for some parsers.
*/
#ifdef ADVANCED_STATS
#define PARSER_ENTRY_NO_DATA(identifier, parser, prio) \
{ identifier, prio, NULL, ln_v2_parse##parser, NULL, 0, 0 }
#define PARSER_ENTRY(identifier, parser, prio) \
{ identifier, prio, ln_construct##parser, ln_v2_parse##parser, ln_destruct##parser, 0, 0 }
#else
#define PARSER_ENTRY_NO_DATA(identifier, parser, prio) \
{ identifier, prio, NULL, ln_v2_parse##parser, NULL }
#define PARSER_ENTRY(identifier, parser, prio) \
{ identifier, prio, ln_construct##parser, ln_v2_parse##parser, ln_destruct##parser }
#endif
static struct ln_parser_info parser_lookup_table[] = {
PARSER_ENTRY("literal", Literal, 4),
PARSER_ENTRY("repeat", Repeat, 4),
PARSER_ENTRY("date-rfc3164", RFC3164Date, 8),
PARSER_ENTRY("date-rfc5424", RFC5424Date, 8),
PARSER_ENTRY("number", Number, 16),
PARSER_ENTRY("float", Float, 16),
PARSER_ENTRY("hexnumber", HexNumber, 16),
PARSER_ENTRY_NO_DATA("kernel-timestamp", KernelTimestamp, 16),
PARSER_ENTRY_NO_DATA("whitespace", Whitespace, 4),
PARSER_ENTRY_NO_DATA("ipv4", IPv4, 4),
PARSER_ENTRY_NO_DATA("ipv6", IPv6, 4),
PARSER_ENTRY_NO_DATA("word", Word, 32),
PARSER_ENTRY_NO_DATA("alpha", Alpha, 32),
PARSER_ENTRY_NO_DATA("rest", Rest, 255),
PARSER_ENTRY_NO_DATA("op-quoted-string", OpQuotedString, 64),
PARSER_ENTRY_NO_DATA("quoted-string", QuotedString, 64),
PARSER_ENTRY_NO_DATA("date-iso", ISODate, 8),
PARSER_ENTRY_NO_DATA("time-24hr", Time24hr, 8),
PARSER_ENTRY_NO_DATA("time-12hr", Time12hr, 8),
PARSER_ENTRY_NO_DATA("duration", Duration, 16),
PARSER_ENTRY_NO_DATA("cisco-interface-spec", CiscoInterfaceSpec, 4),
PARSER_ENTRY_NO_DATA("name-value-list", NameValue, 8),
PARSER_ENTRY_NO_DATA("json", JSON, 4),
PARSER_ENTRY_NO_DATA("cee-syslog", CEESyslog, 4),
PARSER_ENTRY_NO_DATA("mac48", MAC48, 16),
PARSER_ENTRY_NO_DATA("cef", CEF, 4),
PARSER_ENTRY_NO_DATA("v2-iptables", v2IPTables, 4),
PARSER_ENTRY("checkpoint-lea", CheckpointLEA, 4),
PARSER_ENTRY("string-to", StringTo, 32),
PARSER_ENTRY("char-to", CharTo, 32),
PARSER_ENTRY("char-sep", CharSeparated, 32),
PARSER_ENTRY("string", String, 32)
};
#define NPARSERS (sizeof(parser_lookup_table)/sizeof(struct ln_parser_info))
#define DFLT_USR_PARSER_PRIO 30000 /**< default priority if user has not specified it */
static inline const char *
parserName(const prsid_t id)
{
const char *name;
if(id == PRS_CUSTOM_TYPE)
name = "USER-DEFINED";
else
name = parser_lookup_table[id].name;
return name;
}
prsid_t
ln_parserName2ID(const char *const __restrict__ name)
{
unsigned i;
for( i = 0
; i < sizeof(parser_lookup_table) / sizeof(struct ln_parser_info)
; ++i) {
if(!strcmp(parser_lookup_table[i].name, name)) {
return i;
}
}
return PRS_INVALID;
}
/* find type pdag in table. If "bAdd" is set and the type is not
* already present, a new entry will be added.
* Returns NULL on error, ptr to type pdag entry otherwise
*/
struct ln_type_pdag *
ln_pdagFindType(ln_ctx ctx, const char *const __restrict__ name, const int bAdd)
{
struct ln_type_pdag *td = NULL;
int i;
LN_DBGPRINTF(ctx, "ln_pdagFindType, name '%s', bAdd: %d, nTypes %d",
name, bAdd, ctx->nTypes);
for(i = 0 ; i < ctx->nTypes ; ++i) {
if(!strcmp(ctx->type_pdags[i].name, name)) {
td = ctx->type_pdags + i;
goto done;
}
}
if(!bAdd) {
LN_DBGPRINTF(ctx, "custom type '%s' not found", name);
goto done;
}
/* type does not yet exist -- create entry */
LN_DBGPRINTF(ctx, "custom type '%s' does not yet exist, adding...", name);
struct ln_type_pdag *newarr;
newarr = realloc(ctx->type_pdags, sizeof(struct ln_type_pdag) * (ctx->nTypes+1));
if(newarr == NULL) {
LN_DBGPRINTF(ctx, "ln_pdagFindTypeAG: alloc newarr failed");
goto done;
}
ctx->type_pdags = newarr;
td = ctx->type_pdags + ctx->nTypes;
++ctx->nTypes;
td->name = strdup(name);
td->pdag = ln_newPDAG(ctx);
done:
return td;
}
/* we clear some nodes multiple times, but as long as we have no loops
* (it's a dag!) we have no real issue.
*/
static void
ln_pdagComponentClearVisited(struct ln_pdag *const dag)
{
dag->flags.visited = 0;
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *prs = dag->parsers+i;
ln_pdagComponentClearVisited(prs->node);
}
}
static void
ln_pdagClearVisited(ln_ctx ctx)
{
for(int i = 0 ; i < ctx->nTypes ; ++i)
ln_pdagComponentClearVisited(ctx->type_pdags[i].pdag);
ln_pdagComponentClearVisited(ctx->pdag);
}
/**
* Process a parser definition. Note that a single definition can potentially
* contain many parser instances.
* @return parser node ptr or NULL (on error)
*/
ln_parser_t*
ln_newParser(ln_ctx ctx,
json_object *prscnf)
{
ln_parser_t *node = NULL;
json_object *json;
const char *val;
prsid_t prsid;
struct ln_type_pdag *custType = NULL;
const char *name = NULL;
const char *textconf = json_object_to_json_string(prscnf);
int assignedPrio = DFLT_USR_PARSER_PRIO;
int parserPrio;
json_object_object_get_ex(prscnf, "type", &json);
if(json == NULL) {
ln_errprintf(ctx, 0, "parser type missing in config: %s",
json_object_to_json_string(prscnf));
goto done;
}
val = json_object_get_string(json);
if(*val == '@') {
prsid = PRS_CUSTOM_TYPE;
custType = ln_pdagFindType(ctx, val, 0);
parserPrio = 16; /* hopefully relatively specific... */
if(custType == NULL) {
ln_errprintf(ctx, 0, "unknown user-defined type '%s'", val);
goto done;
}
} else {
prsid = ln_parserName2ID(val);
if(prsid == PRS_INVALID) {
ln_errprintf(ctx, 0, "invalid field type '%s'", val);
goto done;
}
parserPrio = parser_lookup_table[prsid].prio;
}
json_object_object_get_ex(prscnf, "name", &json);
if(json == NULL || !strcmp(json_object_get_string(json), "-")) {
name = NULL;
} else {
name = strdup(json_object_get_string(json));
}
json_object_object_get_ex(prscnf, "priority", &json);
if(json != NULL) {
assignedPrio = json_object_get_int(json);
}
LN_DBGPRINTF(ctx, "assigned priority is %d", assignedPrio);
/* we need to remove already processed items from the config, so
* that we can pass the remaining parameters to the parser.
*/
json_object_object_del(prscnf, "type");
json_object_object_del(prscnf, "priority");
if(name != NULL)
json_object_object_del(prscnf, "name");
/* got all data items */
if((node = calloc(1, sizeof(ln_parser_t))) == NULL) {
LN_DBGPRINTF(ctx, "lnNewParser: alloc node failed");
free((void*)name);
goto done;
}
node->node = NULL;
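/* Note on the combined priority below: the user-assigned priority goes
* into the upper 24 bits, the parser's built-in priority into the low
* 8 bits. Illustrative example (derived from the code, not normative):
* with the default user priority 30000 (0x7530) and a built-in priority
* of 4 the result is 0x753004, so the user priority dominates and the
* built-in priority only breaks ties during sorting.
*/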
node->prio = ((assignedPrio << 8) & 0xffffff00) | (parserPrio & 0xff);
node->name = name;
node->prsid = prsid;
node->conf = strdup(textconf);
if(prsid == PRS_CUSTOM_TYPE) {
node->custTypeIdx = custType - ctx->type_pdags;
} else {
if(parser_lookup_table[prsid].construct != NULL) {
parser_lookup_table[prsid].construct(ctx, prscnf, &node->parser_data);
}
}
done:
return node;
}
struct ln_pdag*
ln_newPDAG(ln_ctx ctx)
{
struct ln_pdag *dag;
if((dag = calloc(1, sizeof(struct ln_pdag))) == NULL)
goto done;
dag->refcnt = 1;
dag->ctx = ctx;
ctx->nNodes++;
done: return dag;
}
/* note: we must NOT free the parser itself, because
* it is stored inside a parser table (so no single
* alloc for the parser!).
*/
static void
pdagDeletePrs(ln_ctx ctx, ln_parser_t *const __restrict__ prs)
{
// TODO: be careful here: once we move to real DAG from tree, we
// cannot simply delete the next node! (refcount? something else?)
if(prs->node != NULL)
ln_pdagDelete(prs->node);
free((void*)prs->name);
free((void*)prs->conf);
if(prs->parser_data != NULL)
parser_lookup_table[prs->prsid].destruct(ctx, prs->parser_data);
}
void
ln_pdagDelete(struct ln_pdag *const __restrict__ pdag)
{
if(pdag == NULL)
goto done;
LN_DBGPRINTF(pdag->ctx, "delete %p[%d]: %s", pdag, pdag->refcnt, pdag->rb_id);
--pdag->refcnt;
if(pdag->refcnt > 0)
goto done;
if(pdag->tags != NULL)
json_object_put(pdag->tags);
for(int i = 0 ; i < pdag->nparsers ; ++i) {
pdagDeletePrs(pdag->ctx, pdag->parsers+i);
}
free(pdag->parsers);
free((void*)pdag->rb_id);
free((void*)pdag->rb_file);
free(pdag);
done: return;
}
/**
* pdag optimizer step: literal path compaction
*
* We compact as much as possible and evaluate the path down to
* the first non-compactable element. Note that we must NOT
* compact those literals that are either terminal nodes OR
* carry names, because such a literal is to be parsed out as a field.
*/
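/* Illustrative sketch (invented example): the rulebase loader creates one
* literal node per character, so the literal "GET " initially forms the
* chain 'G' -> 'E' -> 'T' -> ' '. Provided none of these nodes is terminal,
* named, or referenced more than once, the loop below merges them into a
* single literal node for "GET ".
*/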
static inline int
optLitPathCompact(ln_ctx ctx, ln_parser_t *prs)
{
int r = 0;
while(prs != NULL) {
/* note the NOT prefix in the condition below! */
if(!( prs->prsid == PRS_LITERAL
&& prs->name == NULL
&& prs->node->flags.isTerminal == 0
&& prs->node->refcnt == 1
&& prs->node->nparsers == 1
/* we need to do some checks on the child as well */
&& prs->node->parsers[0].prsid == PRS_LITERAL
&& prs->node->parsers[0].name == NULL
&& prs->node->parsers[0].node->refcnt == 1)
)
goto done;
/* ok, we have two compactable literals in a row, let's compact the nodes */
ln_parser_t *child_prs = prs->node->parsers;
LN_DBGPRINTF(ctx, "opt path compact: add %p to %p", child_prs, prs);
CHKR(ln_combineData_Literal(prs->parser_data, child_prs->parser_data));
ln_pdag *const node_del = prs->node;
prs->node = child_prs->node;
child_prs->node = NULL; /* remove, else this would be destructed! */
ln_pdagDelete(node_del);
}
done:
return r;
}
static int
qsort_parserCmp(const void *v1, const void *v2)
{
const ln_parser_t *const p1 = (const ln_parser_t *const) v1;
const ln_parser_t *const p2 = (const ln_parser_t *const) v2;
return p1->prio - p2->prio;
}
static int
ln_pdagComponentOptimize(ln_ctx ctx, struct ln_pdag *const dag)
{
int r = 0;
for(int i = 0 ; i < dag->nparsers ; ++i) { /* TODO: remove when confident enough */
ln_parser_t *prs = dag->parsers+i;
LN_DBGPRINTF(ctx, "pre sort, parser %d:%s[%d]", i, prs->name, prs->prio);
}
/* first sort parsers in priority order */
if(dag->nparsers > 1) {
qsort(dag->parsers, dag->nparsers, sizeof(ln_parser_t), qsort_parserCmp);
}
for(int i = 0 ; i < dag->nparsers ; ++i) { /* TODO: remove when confident enough */
ln_parser_t *prs = dag->parsers+i;
LN_DBGPRINTF(ctx, "post sort, parser %d:%s[%d]", i, prs->name, prs->prio);
}
/* now on to rest of processing */
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *prs = dag->parsers+i;
LN_DBGPRINTF(dag->ctx, "optimizing %p: field %d type '%s', name '%s': '%s':",
prs->node, i, parserName(prs->prsid), prs->name,
(prs->prsid == PRS_LITERAL) ? ln_DataForDisplayLiteral(dag->ctx, prs->parser_data)
: "UNKNOWN");
optLitPathCompact(ctx, prs);
ln_pdagComponentOptimize(ctx, prs->node);
}
return r;
}
static void
deleteComponentID(struct ln_pdag *const __restrict__ dag)
{
free((void*)dag->rb_id);
dag->rb_id = NULL;
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *prs = dag->parsers+i;
deleteComponentID(prs->node);
}
}
/* fixes rb_ids for this node as well as its children.
* This is required if the ALTERNATIVE parser type is used,
* which will create component IDs for each of its invocations.
* As such, we do not only fix the string, but know that all
* children also need fixing. We do this by simply deleting
* all of their rb_ids, as we know they will be visited again.
* Note: if we introduce the same situation via new functionality,
* we may need to review this code here as well. Also note
* that the component ID will not be 100% correct after our fix,
* because that ID could actually be created by two sets of rules.
* But this is the best we can do.
*/
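/* Hypothetical example of the fixup: with curr = "abc%x%" and new = "abc%y%"
* the common prefix is "abc", the '%' is handed back to both variants, and
* the resulting rb_id becomes "abc[%x%|%y%]".
*/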
static void
fixComponentID(struct ln_pdag *const __restrict__ dag, const char *const new)
{
char *updated;
const char *const curr = dag->rb_id;
int i;
int len = (int) strlen(curr);
for(i = 0 ; i < len ; ++i){
if(curr[i] != new [i])
break;
}
if(i >= 1 && curr[i-1] == '%')
--i;
if(asprintf(&updated, "%.*s[%s|%s]", i, curr, curr+i, new+i) == -1)
goto done;
deleteComponentID(dag);
dag->rb_id = updated;
done: return;
}
/**
* Assign human-readable identifiers (names) to each node. These are
* later used in stats, debug output and wherever else this may make
* sense.
*/
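/* Sketch of the identifiers this produces (invented example): for a rule
* like "error %code:number%" the node after the literal gets the rb_id
* "error ", and the node after the number parser gets
* "error %code:number%"; i.e. non-literal parsers are rendered as
* %name:type% and named literals as %name:literal:text%.
*/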
static void
ln_pdagComponentSetIDs(ln_ctx ctx, struct ln_pdag *const dag, const char *prefix)
{
char *id = NULL;
if(prefix == NULL)
goto done;
if(dag->rb_id == NULL) {
dag->rb_id = strdup(prefix);
} else {
LN_DBGPRINTF(ctx, "rb_id already exists - fixing as good as "
"possible. This happens with ALTERNATIVE parser. "
"old: '%s', new: '%s'",
dag->rb_id, prefix);
fixComponentID(dag, prefix);
LN_DBGPRINTF(ctx, "\"fixed\" rb_id: %s", dag->rb_id);
prefix = dag->rb_id;
}
/* now on to rest of processing */
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *prs = dag->parsers+i;
if(prs->prsid == PRS_LITERAL) {
if(prs->name == NULL) {
if(asprintf(&id, "%s%s", prefix,
ln_DataForDisplayLiteral(dag->ctx, prs->parser_data)) == -1)
goto done;
} else {
if(asprintf(&id, "%s%%%s:%s:%s%%", prefix,
prs->name,
parserName(prs->prsid),
ln_DataForDisplayLiteral(dag->ctx, prs->parser_data)) == -1)
goto done;
}
} else {
if(asprintf(&id, "%s%%%s:%s%%", prefix,
prs->name ? prs->name : "-",
parserName(prs->prsid)) == -1)
goto done;
}
ln_pdagComponentSetIDs(ctx, prs->node, id);
free(id);
}
done: return;
}
/**
* Optimize the pdag.
* This includes all components.
*/
int
ln_pdagOptimize(ln_ctx ctx)
{
int r = 0;
for(int i = 0 ; i < ctx->nTypes ; ++i) {
LN_DBGPRINTF(ctx, "optimizing component %s\n", ctx->type_pdags[i].name);
ln_pdagComponentOptimize(ctx, ctx->type_pdags[i].pdag);
ln_pdagComponentSetIDs(ctx, ctx->type_pdags[i].pdag, "");
}
LN_DBGPRINTF(ctx, "optimizing main pdag component");
ln_pdagComponentOptimize(ctx, ctx->pdag);
LN_DBGPRINTF(ctx, "finished optimizing main pdag component");
ln_pdagComponentSetIDs(ctx, ctx->pdag, "");
LN_DBGPRINTF(ctx, "---AFTER OPTIMIZATION------------------");
ln_displayPDAG(ctx);
LN_DBGPRINTF(ctx, "=======================================");
return r;
}
#define LN_INTERN_PDAG_STATS_NPARSERS 100
/* data structure for pdag statistics */
struct pdag_stats {
int nodes;
int term_nodes;
int parsers;
int max_nparsers;
int nparsers_cnt[LN_INTERN_PDAG_STATS_NPARSERS];
int nparsers_100plus;
int *prs_cnt;
};
/**
* Recursive step of statistics gatherer.
*/
static int
ln_pdagStatsRec(ln_ctx ctx, struct ln_pdag *const dag, struct pdag_stats *const stats)
{
if(dag->flags.visited)
return 0;
dag->flags.visited = 1;
stats->nodes++;
if(dag->flags.isTerminal)
stats->term_nodes++;
if(dag->nparsers > stats->max_nparsers)
stats->max_nparsers = dag->nparsers;
if(dag->nparsers >= LN_INTERN_PDAG_STATS_NPARSERS)
stats->nparsers_100plus++;
else
stats->nparsers_cnt[dag->nparsers]++;
stats->parsers += dag->nparsers;
int max_path = 0;
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *prs = dag->parsers+i;
if(prs->prsid != PRS_CUSTOM_TYPE)
stats->prs_cnt[prs->prsid]++;
const int path_len = ln_pdagStatsRec(ctx, prs->node, stats);
if(path_len > max_path)
max_path = path_len;
}
return max_path + 1;
}
static void
ln_pdagStatsExtended(ln_ctx ctx, struct ln_pdag *const dag, FILE *const fp, int level)
{
char indent[2048];
if(level > 1023)
level = 1023;
memset(indent, ' ', level * 2);
indent[level * 2] = '\0';
if(dag->stats.called > 0) {
fprintf(fp, "%u, %u, %s\n",
dag->stats.called,
dag->stats.backtracked,
dag->rb_id);
}
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *const prs = dag->parsers+i;
if(prs->node->stats.called > 0) {
ln_pdagStatsExtended(ctx, prs->node, fp, level+1);
}
}
}
/**
* Gather pdag statistics for a *specific* pdag.
*
* Data is sent to given file ptr.
*/
static void
ln_pdagStats(ln_ctx ctx, struct ln_pdag *const dag, FILE *const fp, const int extendedStats)
{
struct pdag_stats *const stats = calloc(1, sizeof(struct pdag_stats));
stats->prs_cnt = calloc(NPARSERS, sizeof(int));
//ln_pdagClearVisited(ctx);
const int longest_path = ln_pdagStatsRec(ctx, dag, stats);
fprintf(fp, "nodes.............: %4d\n", stats->nodes);
fprintf(fp, "terminal nodes....: %4d\n", stats->term_nodes);
fprintf(fp, "parsers entries...: %4d\n", stats->parsers);
fprintf(fp, "longest path......: %4d\n", longest_path);
fprintf(fp, "Parser Type Counts:\n");
for(prsid_t i = 0 ; i < NPARSERS ; ++i) {
if(stats->prs_cnt[i] != 0)
fprintf(fp, "\t%20s: %d\n", parserName(i), stats->prs_cnt[i]);
}
int pp = 0;
fprintf(fp, "Parsers per Node:\n");
fprintf(fp, "\tmax:\t%4d\n", stats->max_nparsers);
for(int i = 0 ; i < 100 ; ++i) {
pp += stats->nparsers_cnt[i];
if(stats->nparsers_cnt[i] != 0)
fprintf(fp, "\t%d:\t%4d\n", i, stats->nparsers_cnt[i]);
}
free(stats->prs_cnt);
free(stats);
if(extendedStats) {
fprintf(fp, "Usage Statistics:\n"
"-----------------\n");
fprintf(fp, "called, backtracked, rule\n");
ln_pdagComponentClearVisited(dag);
ln_pdagStatsExtended(ctx, dag, fp, 0);
}
}
/**
* Gather and output pdag statistics for the full pdag (ctx)
* including all disconnected components (type defs).
*
* Data is sent to given file ptr.
*/
void
ln_fullPdagStats(ln_ctx ctx, FILE *const fp, const int extendedStats)
{
if(ctx->ptree != NULL) {
/* we need to handle the old cruft */
ln_fullPTreeStats(ctx, fp, extendedStats);
return;
}
fprintf(fp, "User-Defined Types\n"
"==================\n");
fprintf(fp, "number types: %d\n", ctx->nTypes);
for(int i = 0 ; i < ctx->nTypes ; ++i)
fprintf(fp, "type: %s\n", ctx->type_pdags[i].name);
for(int i = 0 ; i < ctx->nTypes ; ++i) {
fprintf(fp, "\n"
"type PDAG: %s\n"
"----------\n", ctx->type_pdags[i].name);
ln_pdagStats(ctx, ctx->type_pdags[i].pdag, fp, extendedStats);
}
fprintf(fp, "\n"
"Main PDAG\n"
"=========\n");
ln_pdagStats(ctx, ctx->pdag, fp, extendedStats);
#ifdef ADVANCED_STATS
const uint64_t parsers_failed = advstats_parsers_called - advstats_parsers_success;
fprintf(fp, "\n"
"Advanced Runtime Stats\n"
"======================\n");
fprintf(fp, "These are actual number from analyzing the control flow "
"at runtime.\n");
fprintf(fp, "Note that literal matching is also done via parsers. As such, \n"
"it is expected that fail rates increase with the size of the \n"
"rule base.\n");
fprintf(fp, "\n");
fprintf(fp, "Parser Calls:\n");
fprintf(fp, "total....: %10" PRIu64 "\n", advstats_parsers_called);
fprintf(fp, "succesful: %10" PRIu64 "\n", advstats_parsers_success);
fprintf(fp, "failed...: %10" PRIu64 " [%d%%]\n",
parsers_failed,
(int) ((parsers_failed * 100) / advstats_parsers_called) );
fprintf(fp, "\nIndividual Parser Calls "
"(never called parsers are not shown):\n");
for( size_t i = 0
; i < sizeof(parser_lookup_table) / sizeof(struct ln_parser_info)
; ++i) {
if(parser_lookup_table[i].called > 0) {
const uint64_t failed = parser_lookup_table[i].called
- parser_lookup_table[i].success;
fprintf(fp, "%20s: %10" PRIu64 " [%5.2f%%] "
"success: %10" PRIu64 " [%5.1f%%] "
"fail: %10" PRIu64 " [%5.1f%%]"
"\n",
parser_lookup_table[i].name,
parser_lookup_table[i].called,
(float)(parser_lookup_table[i].called * 100)
/ advstats_parsers_called,
parser_lookup_table[i].success,
(float)(parser_lookup_table[i].success * 100)
/ parser_lookup_table[i].called,
failed,
(float)(failed * 100)
/ parser_lookup_table[i].called
);
}
}
uint64_t total_len;
uint64_t total_cnt;
fprintf(fp, "\n");
fprintf(fp, "\n"
"Path Length Statistics\n"
"----------------------\n"
"The regular path length is the number of nodes being visited,\n"
"where each node potentially evaluates several parsers. The\n"
"parser call statistic is the number of parsers called along\n"
"the path. That number is higher, as multiple parsers may be\n"
"called at each node. The number of literal parser calls is\n"
"given explicitely, as they use almost no time to process.\n"
"\n"
);
total_len = 0;
total_cnt = 0;
fprintf(fp, "Path Length\n");
for(int i = 0 ; i < ADVSTATS_MAX_ENTITIES ; ++i) {
if(advstats_pathlens[i] > 0 ) {
fprintf(fp, "%3d: %d\n", i, advstats_pathlens[i]);
total_len += i * advstats_pathlens[i];
total_cnt += advstats_pathlens[i];
}
}
fprintf(fp, "avg: %f\n", (double) total_len / (double) total_cnt);
fprintf(fp, "max: %d\n", advstats_max_pathlen);
fprintf(fp, "\n");
total_len = 0;
total_cnt = 0;
fprintf(fp, "Nbr Backtracked\n");
for(int i = 0 ; i < ADVSTATS_MAX_ENTITIES ; ++i) {
if(advstats_backtracks[i] > 0 ) {
fprintf(fp, "%3d: %d\n", i, advstats_backtracks[i]);
total_len += i * advstats_backtracks[i];
total_cnt += advstats_backtracks[i];
}
}
fprintf(fp, "avg: %f\n", (double) total_len / (double) total_cnt);
fprintf(fp, "max: %d\n", advstats_max_backtracked);
fprintf(fp, "\n");
/* we calc some stats while we output */
total_len = 0;
total_cnt = 0;
fprintf(fp, "Parser Calls\n");
for(int i = 0 ; i < ADVSTATS_MAX_ENTITIES ; ++i) {
if(advstats_parser_calls[i] > 0 ) {
fprintf(fp, "%3d: %d\n", i, advstats_parser_calls[i]);
total_len += i * advstats_parser_calls[i];
total_cnt += advstats_parser_calls[i];
}
}
fprintf(fp, "avg: %f\n", (double) total_len / (double) total_cnt);
fprintf(fp, "max: %d\n", advstats_max_parser_calls);
fprintf(fp, "\n");
total_len = 0;
total_cnt = 0;
fprintf(fp, "LITERAL Parser Calls\n");
for(int i = 0 ; i < ADVSTATS_MAX_ENTITIES ; ++i) {
if(advstats_lit_parser_calls[i] > 0 ) {
fprintf(fp, "%3d: %d\n", i, advstats_lit_parser_calls[i]);
total_len += i * advstats_lit_parser_calls[i];
total_cnt += advstats_lit_parser_calls[i];
}
}
fprintf(fp, "avg: %f\n", (double) total_len / (double) total_cnt);
fprintf(fp, "max: %d\n", advstats_max_lit_parser_calls);
fprintf(fp, "\n");
#endif
}
/**
* Check if the provided dag is a leaf. This means that it
* does not contain any subdags.
* @return 1 if it is a leaf, 0 otherwise
*/
static inline int
isLeaf(struct ln_pdag *dag)
{
return dag->nparsers == 0 ? 1 : 0;
}
/**
* Add a parser instance to the pdag at the current position.
*
* @param[in] ctx
* @param[in] prscnf json parser config *object* (no array!)
* @param[in] pdag current pdag position (to which parser is to be added)
* @param[in/out] nextnode contains a pointer to the next node, either
* an existing one or one newly created.
*
* The nextnode parameter permits using this function to create
* multiple alternative parsers with a single run. To do so,
* set nextnode=NULL on the first call. On successive calls, keep the
* value. If a value is present, we will not accept non-identical
* parsers which point to different nodes - this will result in an
* error.
*
* IMPORTANT: the caller is responsible for updating its pdag pointer
* to the nextnode value when it is done adding parsers.
*
* If a parser of the same type with identical data already exists,
* it is "reused", which means the function is effectively used to
* walk the path. This is used during parser construction to
* navigate to new parts of the pdag.
*/
static int
ln_pdagAddParserInstance(ln_ctx ctx,
json_object *const __restrict__ prscnf,
struct ln_pdag *const __restrict__ pdag,
struct ln_pdag **nextnode)
{
int r;
ln_parser_t *newtab;
LN_DBGPRINTF(ctx, "ln_pdagAddParserInstance: %s, nextnode %p",
json_object_to_json_string(prscnf), *nextnode);
ln_parser_t *const parser = ln_newParser(ctx, prscnf);
CHKN(parser);
LN_DBGPRINTF(ctx, "pdag: %p, parser %p", pdag, parser);
/* check if we already have this parser, if so, merge
*/
int i;
for(i = 0 ; i < pdag->nparsers ; ++i) {
LN_DBGPRINTF(ctx, "parser comparison:\n%s\n%s", pdag->parsers[i].conf, parser->conf);
if( pdag->parsers[i].prsid == parser->prsid
&& !strcmp(pdag->parsers[i].conf, parser->conf)) {
// FIXME: the current ->conf object depends on
// the order of json elements. We should do a real JSON
// comparison (a bit more complex). For now, the simple
// string comparison is good enough.
// FIXME: if nextnode is set, check we can actually combine,
// else err out
*nextnode = pdag->parsers[i].node;
r = 0;
LN_DBGPRINTF(ctx, "merging with pdag %p", pdag);
pdagDeletePrs(ctx, parser); /* no need for data items */
goto done;
}
}
/* if we reach this point, we have a new parser type */
if(*nextnode == NULL) {
CHKN(*nextnode = ln_newPDAG(ctx)); /* we need a new node */
} else {
(*nextnode)->refcnt++;
}
parser->node = *nextnode;
newtab = realloc(pdag->parsers, (pdag->nparsers+1) * sizeof(ln_parser_t));
CHKN(newtab);
pdag->parsers = newtab;
memcpy(pdag->parsers+pdag->nparsers, parser, sizeof(ln_parser_t));
pdag->nparsers++;
r = 0;
done:
free(parser);
return r;
}
static int ln_pdagAddParserInternal(ln_ctx ctx, struct ln_pdag **pdag, const int mode, json_object *const prscnf,
struct ln_pdag **nextnode);
/**
* add parsers to current pdag. This is used
* to add parsers stored in an array. The mode specifies
* how parsers shall be added.
*/
#define PRS_ADD_MODE_SEQ 0
#define PRS_ADD_MODE_ALTERNATIVE 1
static int
ln_pdagAddParsers(ln_ctx ctx,
json_object *const prscnf,
const int mode,
struct ln_pdag **pdag,
struct ln_pdag **p_nextnode)
{
int r = LN_BADCONFIG;
struct ln_pdag *dag = *pdag;
struct ln_pdag *nextnode = *p_nextnode;
const int lenarr = json_object_array_length(prscnf);
for(int i = 0 ; i < lenarr ; ++i) {
struct json_object *const curr_prscnf =
json_object_array_get_idx(prscnf, i);
LN_DBGPRINTF(ctx, "parser %d: %s", i, json_object_to_json_string(curr_prscnf));
if(json_object_get_type(curr_prscnf) == json_type_array) {
struct ln_pdag *local_dag = dag;
CHKR(ln_pdagAddParserInternal(ctx, &local_dag, mode,
curr_prscnf, &nextnode));
if(mode == PRS_ADD_MODE_SEQ) {
dag = local_dag;
}
} else {
CHKR(ln_pdagAddParserInstance(ctx, curr_prscnf, dag, &nextnode));
}
if(mode == PRS_ADD_MODE_SEQ) {
dag = nextnode;
*p_nextnode = nextnode;
nextnode = NULL;
}
}
if(mode != PRS_ADD_MODE_SEQ)
dag = nextnode;
*pdag = dag;
r = 0;
done:
return r;
}
/* add a json parser config object. Note that this object may contain
* multiple parser instances. Additionally, moves the pdag object to
* the next node, which is either newly created or already existing.
*/
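/* Illustrative parser configs accepted here (hand-written examples, not
* taken from a shipped rulebase): a single object such as
* {"type":"number", "name":"port"}, an array of such objects (treated as
* a sequence), or an alternative such as
* {"type":"alternative", "parser":[ [branch-1-parsers], [branch-2-parsers] ]}
* where "parser" must be an array, as checked below.
*/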
static int
ln_pdagAddParserInternal(ln_ctx ctx, struct ln_pdag **pdag,
const int mode, json_object *const prscnf, struct ln_pdag **nextnode)
{
int r = LN_BADCONFIG;
struct ln_pdag *dag = *pdag;
LN_DBGPRINTF(ctx, "ln_pdagAddParserInternal: %s", json_object_to_json_string(prscnf));
if(json_object_get_type(prscnf) == json_type_object) {
/* check for special types we need to handle here */
struct json_object *json;
json_object_object_get_ex(prscnf, "type", &json);
const char *const ftype = json_object_get_string(json);
if(!strcmp(ftype, "alternative")) {
json_object_object_get_ex(prscnf, "parser", &json);
if(json_object_get_type(json) != json_type_array) {
ln_errprintf(ctx, 0, "alternative type needs array of parsers. "
"Object: '%s', type is %s",
json_object_to_json_string(prscnf),
json_type_to_name(json_object_get_type(json)));
goto done;
}
CHKR(ln_pdagAddParsers(ctx, json, PRS_ADD_MODE_ALTERNATIVE, &dag, nextnode));
} else {
CHKR(ln_pdagAddParserInstance(ctx, prscnf, dag, nextnode));
if(mode == PRS_ADD_MODE_SEQ)
dag = *nextnode;
}
} else if(json_object_get_type(prscnf) == json_type_array) {
CHKR(ln_pdagAddParsers(ctx, prscnf, PRS_ADD_MODE_SEQ, &dag, nextnode));
} else {
ln_errprintf(ctx, 0, "bug: prscnf object of wrong type. Object: '%s'",
json_object_to_json_string(prscnf));
goto done;
}
*pdag = dag;
done:
return r;
}
/* add a json parser config object. Note that this object may contain
* multiple parser instances. Additionally, moves the pdag object to
* the next node, which is either newly created or already existing.
*/
int
ln_pdagAddParser(ln_ctx ctx, struct ln_pdag **pdag, json_object *const prscnf)
{
struct ln_pdag *nextnode = NULL;
int r = ln_pdagAddParserInternal(ctx, pdag, PRS_ADD_MODE_SEQ, prscnf, &nextnode);
json_object_put(prscnf);
return r;
}
void
ln_displayPDAGComponent(struct ln_pdag *dag, int level)
{
char indent[2048];
if(level > 1023)
level = 1023;
memset(indent, ' ', level * 2);
indent[level * 2] = '\0';
LN_DBGPRINTF(dag->ctx, "%ssubDAG%s %p (children: %d parsers, ref %d) [called %u, backtracked %u]",
indent, dag->flags.isTerminal ? " [TERM]" : "", dag, dag->nparsers, dag->refcnt,
dag->stats.called, dag->stats.backtracked);
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *const prs = dag->parsers+i;
LN_DBGPRINTF(dag->ctx, "%sfield type '%s', name '%s': '%s': called %u", indent,
parserName(prs->prsid),
dag->parsers[i].name,
(prs->prsid == PRS_LITERAL) ? ln_DataForDisplayLiteral(dag->ctx, prs->parser_data) : "UNKNOWN",
dag->parsers[i].node->stats.called);
}
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *const prs = dag->parsers+i;
LN_DBGPRINTF(dag->ctx, "%sfield type '%s', name '%s': '%s':", indent,
parserName(prs->prsid),
dag->parsers[i].name,
(prs->prsid == PRS_LITERAL) ? ln_DataForDisplayLiteral(dag->ctx, prs->parser_data) :
"UNKNOWN");
if(prs->prsid == PRS_REPEAT) {
struct data_Repeat *const data = (struct data_Repeat*) prs->parser_data;
LN_DBGPRINTF(dag->ctx, "%sparser:", indent);
ln_displayPDAGComponent(data->parser, level + 1);
LN_DBGPRINTF(dag->ctx, "%swhile:", indent);
ln_displayPDAGComponent(data->while_cond, level + 1);
LN_DBGPRINTF(dag->ctx, "%send repeat def", indent);
}
ln_displayPDAGComponent(dag->parsers[i].node, level + 1);
}
}
void ln_displayPDAGComponentAlternative(struct ln_pdag *dag, int level)
{
char indent[2048];
if(level > 1023)
level = 1023;
memset(indent, ' ', level * 2);
indent[level * 2] = '\0';
LN_DBGPRINTF(dag->ctx, "%s%p[ref %d]: %s", indent, dag, dag->refcnt, dag->rb_id);
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_displayPDAGComponentAlternative(dag->parsers[i].node, level + 1);
}
}
/* developer debug aid, to be used for example as follows:
* LN_DBGPRINTF(dag->ctx, "---------------------------------------");
* ln_displayPDAG(dag);
* LN_DBGPRINTF(dag->ctx, "=======================================");
*/
void
ln_displayPDAG(ln_ctx ctx)
{
ln_pdagClearVisited(ctx);
for(int i = 0 ; i < ctx->nTypes ; ++i) {
LN_DBGPRINTF(ctx, "COMPONENT: %s", ctx->type_pdags[i].name);
ln_displayPDAGComponent(ctx->type_pdags[i].pdag, 0);
}
LN_DBGPRINTF(ctx, "MAIN COMPONENT:");
ln_displayPDAGComponent(ctx->pdag, 0);
LN_DBGPRINTF(ctx, "MAIN COMPONENT (alternative):");
ln_displayPDAGComponentAlternative(ctx->pdag, 0);
}
/* the following is a quick hack, which should be moved to the
* string class.
*/
static inline void dotAddPtr(es_str_t **str, void *p)
{
char buf[64];
int i;
i = snprintf(buf, sizeof(buf), "l%p", p);
es_addBuf(str, buf, i);
}
struct data_Literal { const char *lit; }; // TODO remove when this hack is no longer needed
/**
* recursive handler for DOT graph generator.
*/
static void
ln_genDotPDAGGraphRec(struct ln_pdag *dag, es_str_t **str)
{
char s_refcnt[16];
LN_DBGPRINTF(dag->ctx, "in dot: %p, visited %d", dag, (int) dag->flags.visited);
if(dag->flags.visited)
return; /* already processed this subpart */
dag->flags.visited = 1;
dotAddPtr(str, dag);
snprintf(s_refcnt, sizeof(s_refcnt), "%d", dag->refcnt);
s_refcnt[sizeof(s_refcnt)-1] = '\0';
es_addBufConstcstr(str, " [ label=\"");
es_addBuf(str, s_refcnt, strlen(s_refcnt));
es_addBufConstcstr(str, "\"");
if(isLeaf(dag)) {
es_addBufConstcstr(str, " style=\"bold\"");
}
es_addBufConstcstr(str, "]\n");
/* display field subdags */
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *const prs = dag->parsers+i;
dotAddPtr(str, dag);
es_addBufConstcstr(str, " -> ");
dotAddPtr(str, prs->node);
es_addBufConstcstr(str, " [label=\"");
es_addBuf(str, parserName(prs->prsid), strlen(parserName(prs->prsid)));
es_addBufConstcstr(str, ":");
//es_addStr(str, node->name);
if(prs->prsid == PRS_LITERAL) {
for(const char *p = ((struct data_Literal*)prs->parser_data)->lit ; *p ; ++p) {
// TODO: handle! if(*p == '\\')
//es_addChar(str, '\\');
if(*p != '\\' && *p != '"')
es_addChar(str, *p);
}
}
es_addBufConstcstr(str, "\"");
es_addBufConstcstr(str, " style=\"dotted\"]\n");
ln_genDotPDAGGraphRec(prs->node, str);
}
}
void
ln_genDotPDAGGraph(struct ln_pdag *dag, es_str_t **str)
{
ln_pdagClearVisited(dag->ctx);
es_addBufConstcstr(str, "digraph pdag {\n");
ln_genDotPDAGGraphRec(dag, str);
es_addBufConstcstr(str, "}\n");
}
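/* Rough sketch of the generated DOT output (node names are derived from
* pointers and shortened here for illustration):
* digraph pdag {
* l0x55d0 [ label="1"]
* l0x55d0 -> l0x55e8 [label="literal:GET " style="dotted"]
* l0x55e8 [ label="1" style="bold"]
* }
* The result can be rendered with graphviz, e.g. "dot -Tpng".
*/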
/**
* recursive handler for statistics DOT graph generator.
*/
static void
ln_genStatsDotPDAGGraphRec(struct ln_pdag *dag, FILE *const __restrict__ fp)
{
if(dag->flags.visited)
return; /* already processed this subpart */
dag->flags.visited = 1;
fprintf(fp, "l%p [ label=\"%u:%u\"", dag,
dag->stats.called, dag->stats.backtracked);
if(isLeaf(dag)) {
fprintf(fp, " style=\"bold\"");
}
fprintf(fp, "]\n");
/* display field subdags */
for(int i = 0 ; i < dag->nparsers ; ++i) {
ln_parser_t *const prs = dag->parsers+i;
if(prs->node->stats.called == 0)
continue;
fprintf(fp, "l%p -> l%p [label=\"", dag, prs->node);
if(prs->prsid == PRS_LITERAL) {
for(const char *p = ((struct data_Literal*)prs->parser_data)->lit ; *p ; ++p) {
if(*p != '\\' && *p != '"')
fputc(*p, fp);
}
} else {
fprintf(fp, "%s", parserName(prs->prsid));
}
fprintf(fp, "\" style=\"dotted\"]\n");
ln_genStatsDotPDAGGraphRec(prs->node, fp);
}
}
static void
ln_genStatsDotPDAGGraph(struct ln_pdag *dag, FILE *const fp)
{
ln_pdagClearVisited(dag->ctx);
fprintf(fp, "digraph pdag {\n");
ln_genStatsDotPDAGGraphRec(dag, fp);
fprintf(fp, "}\n");
}
void
ln_fullPDagStatsDOT(ln_ctx ctx, FILE *const fp)
{
ln_genStatsDotPDAGGraph(ctx->pdag, fp);
}
static inline int
addOriginalMsg(const char *str, const size_t strLen, struct json_object *const json)
{
int r = 1;
struct json_object *value;
value = json_object_new_string_len(str, strLen);
if (value == NULL) {
goto done;
}
json_object_object_add(json, ORIGINAL_MSG_KEY, value);
r = 0;
done:
return r;
}
static char *
strrev(char *const __restrict__ str)
{
const size_t len = strlen(str);
if(len < 2)
return str; /* nothing to swap; also guards against the empty string */
size_t i = len - 1, j = 0;
while(i > j) {
const char ch = str[i];
str[i] = str[j];
str[j] = ch;
i--;
j++;
}
return str;
}
/* note: "originalmsg" is NOT added as metadata in order to keep
* backwards compatible.
*/
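/* Sketch of the metadata added on a successful match (key macros shown
* symbolically; their actual strings are defined elsewhere):
* { META_KEY: { META_RULE_KEY: {
*     RULE_MOCKUP_KEY: "<reconstructed rule>",
*     RULE_LOCATION_KEY: { "file": "<rulebase file>", "line": <number> }
* } } }
* Which parts appear depends on the LN_CTXOPT_ADD_RULE* options set.
*/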
static inline void
addRuleMetadata(npb_t *const __restrict__ npb,
struct json_object *const json,
struct ln_pdag *const __restrict__ endNode)
{
ln_ctx ctx = npb->ctx;
struct json_object *meta = NULL;
struct json_object *meta_rule = NULL;
struct json_object *value;
if(ctx->opts & LN_CTXOPT_ADD_RULE) { /* matching rule mockup */
if(meta_rule == NULL)
meta_rule = json_object_new_object();
char *cstr = strrev(es_str2cstr(npb->rule, NULL));
json_object_object_add(meta_rule, RULE_MOCKUP_KEY,
json_object_new_string(cstr));
free(cstr);
}
if(ctx->opts & LN_CTXOPT_ADD_RULE_LOCATION) {
if(meta_rule == NULL)
meta_rule = json_object_new_object();
struct json_object *const location = json_object_new_object();
value = json_object_new_string(endNode->rb_file);
json_object_object_add(location, "file", value);
value = json_object_new_int((int)endNode->rb_lineno);
json_object_object_add(location, "line", value);
json_object_object_add(meta_rule, RULE_LOCATION_KEY, location);
}
if(meta_rule != NULL) {
if(meta == NULL)
meta = json_object_new_object();
json_object_object_add(meta, META_RULE_KEY, meta_rule);
}
#ifdef ADVANCED_STATS
/* complete execution path */
if(ctx->opts & LN_CTXOPT_ADD_EXEC_PATH) {
if(meta == NULL)
meta = json_object_new_object();
char hdr[128];
const size_t lenhdr
= snprintf(hdr, sizeof(hdr), "[PATHLEN:%d, PARSER CALLS gen:%d, literal:%d]",
npb->astats.pathlen, npb->astats.parser_calls,
npb->astats.lit_parser_calls);
es_addBuf(&npb->astats.exec_path, hdr, lenhdr);
char * cstr = es_str2cstr(npb->astats.exec_path, NULL);
value = json_object_new_string(cstr);
if (value != NULL) {
json_object_object_add(meta, EXEC_PATH_KEY, value);
}
free(cstr);
}
#endif
if(meta != NULL)
json_object_object_add(json, META_KEY, meta);
}
/**
* add unparsed string to event.
*/
static inline int
addUnparsedField(const char *str, const size_t strLen, const size_t offs, struct json_object *json)
{
int r = 1;
struct json_object *value;
CHKR(addOriginalMsg(str, strLen, json));
value = json_object_new_string(str + offs);
if (value == NULL) {
goto done;
}
json_object_object_add(json, UNPARSED_DATA_KEY, value);
r = 0;
done:
return r;
}
/* Do some fixup to the json that we cannot do on a lower layer */
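/* Two special field names are handled below (illustrative examples):
* - a parser named "." that returns {"a":1,"b":2} has its members merged
*   directly into the event, so the event gains "a" and "b";
* - a parser named "req" whose value is an object with the single member
*   "..", e.g. {"..":"GET"}, collapses to that member and is stored as
*   "req":"GET".
*/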
static int
fixJSON(struct ln_pdag *dag,
struct json_object **value,
struct json_object *json,
const ln_parser_t *const prs)
{
int r = LN_WRONGPARSER;
if(prs->name == NULL) {
if (*value != NULL) {
/* Free the unneeded value */
json_object_put(*value);
}
} else if(prs->name[0] == '.' && prs->name[1] == '\0') {
if(json_object_get_type(*value) == json_type_object) {
struct json_object_iterator it = json_object_iter_begin(*value);
struct json_object_iterator itEnd = json_object_iter_end(*value);
while (!json_object_iter_equal(&it, &itEnd)) {
struct json_object *const val = json_object_iter_peek_value(&it);
json_object_get(val);
json_object_object_add(json, json_object_iter_peek_name(&it), val);
json_object_iter_next(&it);
}
json_object_put(*value);
} else {
LN_DBGPRINTF(dag->ctx, "field name is '.', but json type is %s",
json_type_to_name(json_object_get_type(*value)));
json_object_object_add_ex(json, prs->name, *value,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
} else {
int isDotDot = 0;
struct json_object *valDotDot = NULL;
if(json_object_get_type(*value) == json_type_object) {
/* TODO: this needs to be sped up by just checking the first
* member and ensuring there is only one member. This requires
* extensions to libfastjson.
*/
int nSubobj = 0;
struct json_object_iterator it = json_object_iter_begin(*value);
struct json_object_iterator itEnd = json_object_iter_end(*value);
while (!json_object_iter_equal(&it, &itEnd)) {
++nSubobj;
const char *key = json_object_iter_peek_name(&it);
if(key[0] == '.' && key[1] == '.' && key[2] == '\0') {
isDotDot = 1;
valDotDot = json_object_iter_peek_value(&it);
} else {
isDotDot = 0;
}
json_object_iter_next(&it);
}
if(nSubobj != 1)
isDotDot = 0;
}
if(isDotDot) {
LN_DBGPRINTF(dag->ctx, "subordinate field name is '..', combining");
json_object_get(valDotDot);
json_object_put(*value);
json_object_object_add_ex(json, prs->name, valDotDot,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
} else {
json_object_object_add_ex(json, prs->name, *value,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
}
r = 0;
return r;
}
// TODO: streamline prototype when done with changes
static int
tryParser(npb_t *const __restrict__ npb,
struct ln_pdag *dag,
size_t *offs,
size_t *const __restrict__ pParsed,
struct json_object **value,
const ln_parser_t *const prs
)
{
int r;
struct ln_pdag *endNode = NULL;
size_t parsedTo = npb->parsedTo;
# ifdef ADVANCED_STATS
char hdr[16];
const size_t lenhdr
= snprintf(hdr, sizeof(hdr), "%d:", npb->astats.recursion_level);
es_addBuf(&npb->astats.exec_path, hdr, lenhdr);
if(prs->prsid == PRS_LITERAL) {
es_addChar(&npb->astats.exec_path, '\'');
es_addBuf(&npb->astats.exec_path,
ln_DataForDisplayLiteral(dag->ctx,
prs->parser_data),
strlen(ln_DataForDisplayLiteral(dag->ctx,
prs->parser_data))
);
es_addChar(&npb->astats.exec_path, '\'');
} else if(parser_lookup_table[prs->prsid].parser
== ln_v2_parseCharTo) {
es_addBuf(&npb->astats.exec_path,
ln_DataForDisplayCharTo(dag->ctx,
prs->parser_data),
strlen(ln_DataForDisplayCharTo(dag->ctx,
prs->parser_data))
);
} else {
es_addBuf(&npb->astats.exec_path,
parserName(prs->prsid),
strlen(parserName(prs->prsid)) );
}
es_addChar(&npb->astats.exec_path, ',');
# endif
if(prs->prsid == PRS_CUSTOM_TYPE) {
if(*value == NULL)
*value = json_object_new_object();
LN_DBGPRINTF(dag->ctx, "calling custom parser '%s'", dag->ctx->type_pdags[prs->custTypeIdx].name);
r = ln_normalizeRec(npb, dag->ctx->type_pdags[prs->custTypeIdx].pdag, *offs, 1, *value, &endNode);
LN_DBGPRINTF(dag->ctx, "called CUSTOM PARSER '%s', result %d, "
"offs %zd, *pParsed %zd", dag->ctx->type_pdags[prs->custTypeIdx].name, r, *offs, *pParsed);
*pParsed = npb->parsedTo - *offs;
#ifdef ADVANCED_STATS
es_addBuf(&npb->astats.exec_path, hdr, lenhdr);
es_addBuf(&npb->astats.exec_path, "[R:USR],", 8);
#endif
} else {
r = parser_lookup_table[prs->prsid].parser(npb,
offs, prs->parser_data, pParsed, (prs->name == NULL) ? NULL : value);
}
LN_DBGPRINTF(npb->ctx, "parser lookup returns %d, pParsed %zu", r, *pParsed);
npb->parsedTo = parsedTo;
#ifdef ADVANCED_STATS
++advstats_parsers_called;
++npb->astats.parser_calls;
if(prs->prsid == PRS_LITERAL)
++npb->astats.lit_parser_calls;
if(r == 0)
++advstats_parsers_success;
if(prs->prsid != PRS_CUSTOM_TYPE) {
++parser_lookup_table[prs->prsid].called;
if(r == 0)
++parser_lookup_table[prs->prsid].success;
}
#endif
return r;
}
static void
add_str_reversed(npb_t *const __restrict__ npb,
const char *const __restrict__ str,
const size_t len)
{
ssize_t i;
for(i = len - 1 ; i >= 0 ; --i) {
es_addChar(&npb->rule, str[i]);
}
}
/* Add the current parser to the mockup rule.
* Note: we add reversed strings, because we can call this
* function effectively only when walking up the tree.
* This means deepest entries come first. We solve this somewhat
* elegantly by reversing strings, and then reversing the string
* once more when we emit it, so that we get the right order.
*/
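/* Small worked example (invented): for a parser of type "number" named
* "num" this function appends "%rebmun:mun%"; after the final strrev()
* in addRuleMetadata() the emitted mockup reads "%num:number%".
*/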
static inline void
add_rule_to_mockup(npb_t *const __restrict__ npb,
const ln_parser_t *const __restrict__ prs)
{
if(prs->prsid == PRS_LITERAL) {
const char *const val =
ln_DataForDisplayLiteral(npb->ctx,
prs->parser_data);
add_str_reversed(npb, val, strlen(val));
} else {
/* note: name/value order must also be reversed! */
es_addChar(&npb->rule, '%');
add_str_reversed(npb,
parserName(prs->prsid),
strlen(parserName(prs->prsid)) );
es_addChar(&npb->rule, ':');
if(prs->name == NULL) {
es_addChar(&npb->rule, '-');
} else {
add_str_reversed(npb, prs->name, strlen(prs->name));
}
es_addChar(&npb->rule, '%');
}
}
/**
* Recursive step of the normalizer. It walks the parse dag and calls itself
* recursively when this is appropriate. It also implements backtracking in
* those (hopefully rare) cases where it is required.
*
* @param[in] dag current tree to process
* @param[in] string string to be matched against (the to-be-normalized data)
* @param[in] strLen length of the to-be-matched string
* @param[in] offs start position in input data
* @param[out] pParsedTo ptr to the position up to which parsing succeeded at most
* @param[in/out] json ... that is being created during normalization
* @param[out] endNode if a match was found, this is the matching node (undefined otherwise)
*
* @return regular liblognorm error code (0->OK, something else->error)
* TODO: can we use parameter block to prevent pushing params to the stack?
*/
int
ln_normalizeRec(npb_t *const __restrict__ npb,
struct ln_pdag *dag,
const size_t offs,
const int bPartialMatch,
struct json_object *json,
struct ln_pdag **endNode
)
{
int r = LN_WRONGPARSER;
int localR;
size_t i;
size_t iprs;
size_t parsedTo = npb->parsedTo;
size_t parsed = 0;
struct json_object *value;
LN_DBGPRINTF(dag->ctx, "%zu: enter parser, dag node %p, json %p", offs, dag, json);
++dag->stats.called;
#ifdef ADVANCED_STATS
++npb->astats.pathlen;
++npb->astats.recursion_level;
#endif
/* now try the parsers */
for(iprs = 0 ; iprs < dag->nparsers && r != 0 ; ++iprs) {
const ln_parser_t *const prs = dag->parsers + iprs;
if(dag->ctx->debug) {
LN_DBGPRINTF(dag->ctx, "%zu/%d:trying '%s' parser for field '%s', "
"data '%s'",
offs, bPartialMatch, parserName(prs->prsid), prs->name,
(prs->prsid == PRS_LITERAL)
? ln_DataForDisplayLiteral(dag->ctx, prs->parser_data)
: "UNKNOWN");
}
i = offs;
value = NULL;
localR = tryParser(npb, dag, &i, &parsed, &value, prs);
if(localR == 0) {
parsedTo = i + parsed;
/* potential hit, need to verify */
LN_DBGPRINTF(dag->ctx, "%zu: potential hit, trying subtree %p",
offs, prs->node);
r = ln_normalizeRec(npb, prs->node, parsedTo,
bPartialMatch, json, endNode);
LN_DBGPRINTF(dag->ctx, "%zu: subtree returns %d, parsedTo %zu", offs, r, parsedTo);
if(r == 0) {
LN_DBGPRINTF(dag->ctx, "%zu: parser matches at %zu", offs, i);
CHKR(fixJSON(dag, &value, json, prs));
if(npb->ctx->opts & LN_CTXOPT_ADD_RULE) {
add_rule_to_mockup(npb, prs);
}
} else {
++dag->stats.backtracked;
#ifdef ADVANCED_STATS
++npb->astats.backtracked;
es_addBuf(&npb->astats.exec_path, "[B]", 3);
#endif
LN_DBGPRINTF(dag->ctx, "%zu nonmatch, backtracking required, parsed to=%zu",
offs, parsedTo);
if (value != NULL) { /* Free the value if it was created */
json_object_put(value);
}
}
}
/* did we have a longer parser --> then update */
if(parsedTo > npb->parsedTo)
npb->parsedTo = parsedTo;
LN_DBGPRINTF(dag->ctx, "parsedTo %zu, *pParsedTo %zu", parsedTo, npb->parsedTo);
}
LN_DBGPRINTF(dag->ctx, "offs %zu, strLen %zu, isTerm %d", offs, npb->strLen, dag->flags.isTerminal);
if(dag->flags.isTerminal && (offs == npb->strLen || bPartialMatch)) {
*endNode = dag;
r = 0;
goto done;
}
done:
LN_DBGPRINTF(dag->ctx, "%zu returns %d, pParsedTo %zu, parsedTo %zu",
offs, r, npb->parsedTo, parsedTo);
# ifdef ADVANCED_STATS
--npb->astats.recursion_level;
# endif
return r;
}
int
ln_normalize(ln_ctx ctx, const char *str, const size_t strLen, struct json_object **json_p)
{
int r;
struct ln_pdag *endNode = NULL;
/* old cruft */
if(ctx->version == 1) {
r = ln_v1_normalize(ctx, str, strLen, json_p);
goto done;
}
/* end old cruft */
npb_t npb;
memset(&npb, 0, sizeof(npb));
npb.ctx = ctx;
npb.str = str;
npb.strLen = strLen;
if(ctx->opts & LN_CTXOPT_ADD_RULE) {
npb.rule = es_newStr(1024);
}
# ifdef ADVANCED_STATS
npb.astats.exec_path = es_newStr(1024);
# endif
if(*json_p == NULL) {
CHKN(*json_p = json_object_new_object());
}
r = ln_normalizeRec(&npb, ctx->pdag, 0, 0, *json_p, &endNode);
if(ctx->debug) {
if(r == 0) {
LN_DBGPRINTF(ctx, "final result for normalizer: parsedTo %zu, endNode %p, "
"isTerminal %d, tagbucket %p",
npb.parsedTo, endNode, endNode->flags.isTerminal, endNode->tags);
} else {
LN_DBGPRINTF(ctx, "final result for normalizer: parsedTo %zu, endNode %p",
npb.parsedTo, endNode);
}
}
LN_DBGPRINTF(ctx, "DONE, final return is %d", r);
if(r == 0 && endNode->flags.isTerminal) {
/* success, finalize event */
if(endNode->tags != NULL) {
/* add tags to an event */
json_object_get(endNode->tags);
json_object_object_add(*json_p, "event.tags", endNode->tags);
CHKR(ln_annotate(ctx, *json_p, endNode->tags));
}
if(ctx->opts & LN_CTXOPT_ADD_ORIGINALMSG) {
/* originalmsg must be kept outside of metadata for
* backward compatibility reasons.
*/
json_object_object_add(*json_p, ORIGINAL_MSG_KEY,
json_object_new_string_len(str, strLen));
}
addRuleMetadata(&npb, *json_p, endNode);
r = 0;
} else {
addUnparsedField(str, strLen, npb.parsedTo, *json_p);
}
if(ctx->opts & LN_CTXOPT_ADD_RULE) {
es_deleteStr(npb.rule);
}
#ifdef ADVANCED_STATS
if(r != 0)
es_addBuf(&npb.astats.exec_path, "[FAILED]", 8);
else if(!endNode->flags.isTerminal)
es_addBuf(&npb.astats.exec_path, "[FAILED:NON-TERMINAL]", 21);
if(npb.astats.pathlen < ADVSTATS_MAX_ENTITIES)
advstats_pathlens[npb.astats.pathlen]++;
if(npb.astats.pathlen > advstats_max_pathlen) {
advstats_max_pathlen = npb.astats.pathlen;
}
if(npb.astats.backtracked < ADVSTATS_MAX_ENTITIES)
advstats_backtracks[npb.astats.backtracked]++;
if(npb.astats.backtracked > advstats_max_backtracked) {
advstats_max_backtracked = npb.astats.backtracked;
}
/* parser calls */
if(npb.astats.parser_calls < ADVSTATS_MAX_ENTITIES)
advstats_parser_calls[npb.astats.parser_calls]++;
if(npb.astats.parser_calls > advstats_max_parser_calls) {
advstats_max_parser_calls = npb.astats.parser_calls;
}
if(npb.astats.lit_parser_calls < ADVSTATS_MAX_ENTITIES)
advstats_lit_parser_calls[npb.astats.lit_parser_calls]++;
if(npb.astats.lit_parser_calls > advstats_max_lit_parser_calls) {
advstats_max_lit_parser_calls = npb.astats.lit_parser_calls;
}
es_deleteStr(npb.astats.exec_path);
#endif
done: return r;
}
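/* Minimal usage sketch for the public API implemented above (rulebase path
* and message are made-up examples; see liblognorm.h for the authoritative
* declarations):
*
* ln_ctx ctx = ln_initCtx();
* ln_loadSamples(ctx, "example.rulebase");
* struct json_object *json = NULL;
* const char *msg = "Login from 10.0.0.1";
* if(ln_normalize(ctx, msg, strlen(msg), &json) == 0) {
*     puts(json_object_to_json_string(json));
*     json_object_put(json);
* }
* ln_exitCtx(ctx);
*/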
liblognorm-2.0.6/src/samp.c 0000644 0001750 0001750 00000074166 13370246323 012520 0000000 0000000 /* samp.c -- code for ln_samp objects.
* This code handles rulebase processing. Rulebases have been called
* "sample bases" in the early days of liblognorm, thus the name.
*
* Copyright 2010-2018 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include
#include
#include
#include
#include
#include
#include
#include "liblognorm.h"
#include "lognorm.h"
#include "samp.h"
#include "internal.h"
#include "parser.h"
#include "pdag.h"
#include "v1_liblognorm.h"
#include "v1_ptree.h"
void
ln_sampFree(ln_ctx __attribute__((unused)) ctx, struct ln_samp *samp)
{
free(samp);
}
static int
ln_parseLegacyFieldDescr(ln_ctx ctx,
const char *const buf,
const size_t lenBuf,
size_t *bufOffs,
es_str_t **str,
json_object **prscnf)
{
int r = 0;
char *cstr; /* for debug mode strings */
char *ftype = NULL;
char name[MAX_FIELDNAME_LEN];
size_t iDst;
struct json_object *json = NULL;
char *ed = NULL;
es_size_t i = *bufOffs;
es_str_t *edata = NULL;
for( iDst = 0
; iDst < (MAX_FIELDNAME_LEN - 1) && i < lenBuf && buf[i] != ':'
; ++iDst) {
name[iDst] = buf[i++];
}
name[iDst] = '\0';
if(iDst == (MAX_FIELDNAME_LEN - 1)) {
ln_errprintf(ctx, 0, "field name too long in: %s", buf+(*bufOffs));
FAIL(LN_INVLDFDESCR);
}
if(i == lenBuf) {
ln_errprintf(ctx, 0, "field definition wrong in: %s", buf+(*bufOffs));
FAIL(LN_INVLDFDESCR);
}
if(iDst == 0) {
FAIL(LN_INVLDFDESCR);
}
if(ctx->debug) {
ln_dbgprintf(ctx, "parsed field: '%s'", name);
}
if(buf[i] != ':') {
ln_errprintf(ctx, 0, "missing colon in: %s", buf+(*bufOffs));
FAIL(LN_INVLDFDESCR);
}
++i; /* skip ':' */
/* parse and process type (trailing whitespace must be trimmed) */
es_emptyStr(*str);
size_t j = i;
/* scan for terminator */
while(j < lenBuf && buf[j] != ':' && buf[j] != '{' && buf[j] != '%')
++j;
/* now trim trailing space backwards */
size_t next = j;
--j;
while(j >= i && isspace(buf[j]))
--j;
/* now copy */
while(i <= j) {
CHKR(es_addChar(str, buf[i++]));
}
/* finally move i to consumed position */
i = next;
if(i == lenBuf) {
ln_errprintf(ctx, 0, "premature end (missing %%?) in: %s", buf+(*bufOffs));
FAIL(LN_INVLDFDESCR);
}
ftype = es_str2cstr(*str, NULL);
ln_dbgprintf(ctx, "field type '%s', i %d", ftype, i);
if(buf[i] == '{') {
struct json_tokener *tokener = json_tokener_new();
json = json_tokener_parse_ex(tokener, buf+i, (int) (lenBuf - i));
if(json == NULL) {
ln_errprintf(ctx, 0, "invalid json in '%s'", buf+i);
}
i += tokener->char_offset;
json_tokener_free(tokener);
}
if(buf[i] == '%') {
i++;
} else {
/* parse extra data */
CHKN(edata = es_newStr(8));
i++;
while(i < lenBuf) {
if(buf[i] == '%') {
++i;
break; /* end of field */
}
CHKR(es_addChar(&edata, buf[i++]));
}
es_unescapeStr(edata);
if(ctx->debug) {
cstr = es_str2cstr(edata, NULL);
ln_dbgprintf(ctx, "parsed extra data: '%s'", cstr);
free(cstr);
}
}
struct json_object *val;
*prscnf = json_object_new_object();
CHKN(val = json_object_new_string(name));
json_object_object_add(*prscnf, "name", val);
CHKN(val = json_object_new_string(ftype));
json_object_object_add(*prscnf, "type", val);
if(edata != NULL) {
ed = es_str2cstr(edata, " ");
CHKN(val = json_object_new_string(ed));
json_object_object_add(*prscnf, "extradata", val);
}
if(json != NULL) {
/* now we need to merge the json params into the main object */
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
struct json_object *const v = json_object_iter_peek_value(&it);
json_object_get(v);
json_object_object_add(*prscnf, json_object_iter_peek_name(&it), v);
json_object_iter_next(&it);
}
}
*bufOffs = i;
done:
free(ed);
if(edata != NULL)
es_deleteStr(edata);
free(ftype);
if(json != NULL)
json_object_put(json);
return r;
}
/**
* Extract a field description from a sample.
* The field description is added to the tail of the current
* subtree's field list. The parse buffer must be positioned on the
* leading '%' that starts a field definition. It is a program error
* if this condition is not met.
*
* Note that we break up the object model and access ptree members
* directly. Let's consider ourselves a friend of ptree. This is necessary
* to optimize the structure for a high-speed parsing process.
*
* @param[in] str a temporary work string. This is passed in to save the
* creation overhead
* @returns 0 on success, something else otherwise
*/
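/* Illustrative field descriptions this function accepts (examples invented
* for documentation): the legacy form %name:type% or %name:type:extradata%,
* e.g. %src-ip:ipv4%, and the JSON form starting with '{' or '[', e.g.
* %{"type":"number", "name":"num"}%.
*/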
static int
addFieldDescr(ln_ctx ctx, struct ln_pdag **pdag, es_str_t *rule,
size_t *bufOffs, es_str_t **str)
{
int r = 0;
es_size_t i = *bufOffs;
char *ftype = NULL;
const char *buf;
es_size_t lenBuf;
struct json_object *prs_config = NULL;
buf = (const char*)es_getBufAddr(rule);
lenBuf = es_strlen(rule);
assert(buf[i] == '%');
++i; /* "eat" ':' */
/* skip leading whitespace in field name */
while(i < lenBuf && isspace(buf[i]))
++i;
/* check if we have new-style json config */
if(buf[i] == '{' || buf[i] == '[') {
struct json_tokener *tokener = json_tokener_new();
prs_config = json_tokener_parse_ex(tokener, buf+i, (int) (lenBuf - i));
i += tokener->char_offset;
json_tokener_free(tokener);
if(prs_config == NULL || i == lenBuf || buf[i] != '%') {
ln_errprintf(ctx, 0, "invalid json in '%s'", buf+i);
r = -1;
goto done;
}
*bufOffs = i+1; /* eat '%' - if above ensures it is present */
} else {
*bufOffs = i;
CHKR(ln_parseLegacyFieldDescr(ctx, buf, lenBuf, bufOffs, str, &prs_config));
}
CHKR(ln_pdagAddParser(ctx, pdag, prs_config));
done:
free(ftype);
return r;
}
/**
* Construct a literal parser json definition.
*/
static json_object *
newLiteralParserJSONConf(char lit)
{
char buf[] = "x";
buf[0] = lit;
struct json_object *val;
struct json_object *prscnf = json_object_new_object();
val = json_object_new_string("literal");
json_object_object_add(prscnf, "type", val);
val = json_object_new_string(buf);
json_object_object_add(prscnf, "text", val);
return prscnf;
}
/**
* Parse a Literal string out of the template and add it to the tree.
* This function is used to create the unoptimized tree. So we do
* one node for each character. These will be compacted by the optimizer
* in a later stage. The advantage is that we do not need to care about
* splitting the tree. As such the processing is fairly simple:
*
* for each character in literal (left-to-right):
* create literal parser object o
* add new DAG node o, advance to it
*
* @param[in] ctx the context
* @param[in/out] subtree on entry, current subtree; on exit, newest
* deepest subtree
* @param[in] rule string with current rule
* @param[in/out] bufOffs parse pointer, up to which offset is parsed
* (is updated so that it points to first char after consumed
* string on exit).
* @param str a work buffer, provided to prevent creation of a new object
* @return 0 on success, something else otherwise
*/
static int
parseLiteral(ln_ctx ctx, struct ln_pdag **pdag, es_str_t *rule,
size_t *const __restrict__ bufOffs, es_str_t **str)
{
int r = 0;
size_t i = *bufOffs;
unsigned char *buf = es_getBufAddr(rule);
const size_t lenBuf = es_strlen(rule);
const char *cstr = NULL;
es_emptyStr(*str);
while(i < lenBuf) {
if(buf[i] == '%') {
if(i+1 < lenBuf && buf[i+1] != '%') {
break; /* field start is end of literal */
}
if (++i == lenBuf) break;
}
CHKR(es_addChar(str, buf[i]));
++i;
}
es_unescapeStr(*str);
cstr = es_str2cstr(*str, NULL);
if(ctx->debug) {
ln_dbgprintf(ctx, "parsed literal: '%s'", cstr);
}
*bufOffs = i;
/* we now add the string to the tree */
for(i = 0 ; cstr[i] != '\0' ; ++i) {
struct json_object *const prscnf =
newLiteralParserJSONConf(cstr[i]);
CHKN(prscnf);
CHKR(ln_pdagAddParser(ctx, pdag, prscnf));
}
r = 0;
done:
free((void*)cstr);
return r;
}
/* Implementation note:
* We read in the sample, and split it into chunks of literal text and
* fields. Each literal text is added as whole to the tree, as is each
* field individually. To do so, we keep track of our current subtree
* root, which changes whenever a new part of the tree is built. It is
* set to the then-lowest part of the tree, where the next step sample
* data is to be added.
*
* This function processes the whole string or returns an error.
*
* format: literal1%field:type:extra-data%literal2
*
* @returns 0 on success, something else otherwise
*/
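/* Example of a rule body processed this way (invented for illustration):
* Accepted password for %user:word% from %ip:ipv4%
* The literal text is added character by character; the two field
* descriptions become parser nodes in between.
*/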
static int
addSampToTree(ln_ctx ctx,
es_str_t *rule,
ln_pdag *dag,
struct json_object *tagBucket)
{
int r = -1;
es_str_t *str = NULL;
size_t i;
CHKN(str = es_newStr(256));
i = 0;
while(i < es_strlen(rule)) {
LN_DBGPRINTF(ctx, "addSampToTree %zu of %d", i, es_strlen(rule));
CHKR(parseLiteral(ctx, &dag, rule, &i, &str));
/* After the literal there can only be a field */
if (i < es_strlen(rule)) {
CHKR(addFieldDescr(ctx, &dag, rule, &i, &str));
if (i == es_strlen(rule)) {
/* finish the tree with an empty literal to avoid false merging */
CHKR(parseLiteral(ctx, &dag, rule, &i, &str));
}
}
}
LN_DBGPRINTF(ctx, "end addSampToTree %zu of %d", i, es_strlen(rule));
/* we are at the end of rule processing, so this node is a terminal */
dag->flags.isTerminal = 1;
dag->tags = tagBucket;
dag->rb_file = strdup(ctx->conf_file);
dag->rb_lineno = ctx->conf_ln_nbr;
done:
if(str != NULL)
es_deleteStr(str);
return r;
}
/**
* get the initial word of a rule line that tells us the type of the
* line.
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[out] offs offset after "="
* @param[out] str string with "linetype-word" (newly created)
* @returns 0 on success, something else otherwise
*/
static int
getLineType(const char *buf, es_size_t lenBuf, size_t *offs, es_str_t **str)
{
int r = -1;
size_t i;
*str = es_newStr(16);
for(i = 0 ; i < lenBuf && buf[i] != '=' ; ++i) {
CHKR(es_addChar(str, buf[i]));
}
if(i < lenBuf)
++i; /* skip over '=' */
*offs = i;
done: return r;
}
/**
* Get a new common prefix from the config file. That is actually everything from
* the current offset to the end of line.
*
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset after "="
* @param[in/out] str string to store the common prefix. If NULL, it is created,
* otherwise it is emptied.
* @returns 0 on success, something else otherwise
*/
static int
getPrefix(const char *buf, es_size_t lenBuf, es_size_t offs, es_str_t **str)
{
int r;
if(*str == NULL) {
CHKN(*str = es_newStr(lenBuf - offs));
} else {
es_emptyStr(*str);
}
r = es_addBuf(str, (char*)buf + offs, lenBuf - offs);
done: return r;
}
/**
* Extend the common prefix. This means that the line is concatenated
* to the prefix. This is useful if the same rulebase is to be used with
* different prefixes (well, not strictly necessary, but probably useful).
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset where the to-be-added text starts
* @returns 0 on success, something else otherwise
*/
static int
extendPrefix(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t offs)
{
return es_addBuf(&ctx->rulePrefix, (char*)buf+offs, lenBuf - offs);
}
/**
* Add a tag to the tag bucket. Helper to processTags.
* @param[in] ctx current context
* @param[in] tagname string with tag name
* @param[out] tagBucket tagbucket to which new tags shall be added
* the tagbucket is created if it is NULL
* @returns 0 on success, something else otherwise
*/
static int
addTagStrToBucket(ln_ctx ctx, es_str_t *tagname, struct json_object **tagBucket)
{
int r = -1;
char *cstr;
struct json_object *tag;
if(*tagBucket == NULL) {
CHKN(*tagBucket = json_object_new_array());
}
cstr = es_str2cstr(tagname, NULL);
ln_dbgprintf(ctx, "tag found: '%s'", cstr);
CHKN(tag = json_object_new_string(cstr));
json_object_array_add(*tagBucket, tag);
free(cstr);
r = 0;
done: return r;
}
/**
* Extract the tags and create a tag bucket out of them
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in,out] poffs offset where tags start, on exit and success
* offset after tag part (excluding ':')
* @param[out] tagBucket tagbucket to which new tags shall be added
* the tagbucket is created if it is NULL
* @returns 0 on success, something else otherwise
*/
static int
processTags(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t *poffs, struct json_object **tagBucket)
{
int r = -1;
es_str_t *str = NULL;
es_size_t i;
assert(poffs != NULL);
i = *poffs;
while(i < lenBuf && buf[i] != ':') {
if(buf[i] == ',') {
/* end of this tag */
CHKR(addTagStrToBucket(ctx, str, tagBucket));
es_deleteStr(str);
str = NULL;
} else {
if(str == NULL) {
CHKN(str = es_newStr(32));
}
CHKR(es_addChar(&str, buf[i]));
}
++i;
}
if(buf[i] != ':')
goto done;
++i; /* skip ':' */
if(str != NULL) {
CHKR(addTagStrToBucket(ctx, str, tagBucket));
es_deleteStr(str);
}
*poffs = i;
r = 0;
done: return r;
}
/**
* Process a new rule and add it to pdag.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset where rule starts
* @returns 0 on success, something else otherwise
*/
static int
processRule(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t offs)
{
int r = -1;
es_str_t *str;
struct json_object *tagBucket = NULL;
ln_dbgprintf(ctx, "rule line to add: '%s'", buf+offs);
CHKR(processTags(ctx, buf, lenBuf, &offs, &tagBucket));
if(offs == lenBuf) {
ln_errprintf(ctx, 0, "error: actual message sample part is missing");
goto done;
}
if(ctx->rulePrefix == NULL) {
CHKN(str = es_newStr(lenBuf));
} else {
CHKN(str = es_strdup(ctx->rulePrefix));
}
CHKR(es_addBuf(&str, (char*)buf + offs, lenBuf - offs));
addSampToTree(ctx, str, ctx->pdag, tagBucket);
es_deleteStr(str);
r = 0;
done: return r;
}
static int
getTypeName(ln_ctx ctx,
const char *const __restrict__ buf,
const size_t lenBuf,
size_t *const __restrict__ offs,
char *const __restrict__ dstbuf)
{
int r = -1;
size_t iDst;
size_t i = *offs;
if(buf[i] != '@') {
ln_errprintf(ctx, 0, "user-defined type name must "
"start with '@'");
goto done;
}
for( iDst = 0
; i < lenBuf && buf[i] != ':' && iDst < MAX_TYPENAME_LEN - 1
; ++i, ++iDst) {
if(isspace(buf[i])) {
ln_errprintf(ctx, 0, "user-defined type name must "
"not contain whitespace");
goto done;
}
dstbuf[iDst] = buf[i];
}
dstbuf[iDst] = '\0';
if(i < lenBuf && buf[i] == ':') {
r = 0;
*offs = i+1; /* skip ":" */
}
done:
return r;
}
/**
* Process a type definition and add it as one of the PDAG's
* disconnected components.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset where rule starts
* @returns 0 on success, something else otherwise
*/
static int
processType(ln_ctx ctx,
const char *const __restrict__ buf,
const size_t lenBuf,
size_t offs)
{
int r = -1;
es_str_t *str;
char typename[MAX_TYPENAME_LEN];
ln_dbgprintf(ctx, "type line to add: '%s'", buf+offs);
CHKR(getTypeName(ctx, buf, lenBuf, &offs, typename));
ln_dbgprintf(ctx, "type name is '%s'", typename);
ln_dbgprintf(ctx, "type line to add: '%s'", buf+offs);
if(offs == lenBuf) {
ln_errprintf(ctx, 0, "error: actual message sample part is missing in type def");
goto done;
}
// TODO: optimize
CHKN(str = es_newStr(lenBuf));
CHKR(es_addBuf(&str, (char*)buf + offs, lenBuf - offs));
struct ln_type_pdag *const td = ln_pdagFindType(ctx, typename, 1);
CHKN(td);
addSampToTree(ctx, str, td->pdag, NULL);
es_deleteStr(str);
r = 0;
done: return r;
}
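/* For illustration (sketch only): a user-defined type line starts with '@'
* and is otherwise built like a rule, e.g.
*
* type=@myhost:%hostname:word%
*
* getTypeName() extracts "@myhost" and processType() adds the remaining
* sample part to that type's own pdag, so that rules can reference the type.
*/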
/**
* Obtain a field name from a rule base line.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in/out] offs on entry: offset where tag starts,
* on exit: updated offset AFTER TAG and (':')
* @param [out] strTag obtained tag, if successful
* @returns 0 on success, something else otherwise
*/
static int
getFieldName(ln_ctx __attribute__((unused)) ctx, const char *buf, es_size_t lenBuf, es_size_t *offs,
es_str_t **strTag)
{
int r = -1;
es_size_t i;
i = *offs;
while(i < lenBuf &&
(isalnum(buf[i]) || buf[i] == '_' || buf[i] == '.')) {
if(*strTag == NULL) {
CHKN(*strTag = es_newStr(32));
}
CHKR(es_addChar(strTag, buf[i]));
++i;
}
*offs = i;
r = 0;
done: return r;
}
/**
* Skip over whitespace.
* Skips any whitespace present at the offset.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in/out] offs on entry: offset first unprocessed position
*/
static void
skipWhitespace(ln_ctx __attribute__((unused)) ctx, const char *buf, es_size_t lenBuf, es_size_t *offs)
{
while(*offs < lenBuf && isspace(buf[*offs])) {
(*offs)++;
}
}
/**
* Obtain an annotation (field) operation.
* This usually is a plus or minus sign followed by a field name
* followed (if plus) by an equal sign and the field value. On entry,
* offs must be positioned on the first unprocessed field (after ':' for
* the initial field!). Extra whitespace is detected and, if present,
* skipped. The obtained operation is added to the annotation set provided.
* Note that extracted string objects are passed to the annotation; thus it
* is vital NOT to free them (most importantly, this is *not* a memory leak).
*
* @param[in] ctx current context
* @param[in] annot active annotation set to which the operation is to be added
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in/out] offs on entry: offset of first unprocessed character,
* on exit: updated offset after the processed operation
* @returns 0 on success, something else otherwise
*/
static int
getAnnotationOp(ln_ctx ctx, ln_annot *annot, const char *buf, es_size_t lenBuf, es_size_t *offs)
{
int r = -1;
es_size_t i;
es_str_t *fieldName = NULL;
es_str_t *fieldVal = NULL;
ln_annot_opcode opc;
i = *offs;
skipWhitespace(ctx, buf, lenBuf, &i);
if(i == lenBuf) {
r = 0;
goto done; /* nothing left to process (no error!) */
}
switch(buf[i]) {
case '+':
opc = ln_annot_ADD;
break;
case '#':
ln_dbgprintf(ctx, "inline comment in 'annotate' line: %s", buf);
*offs = lenBuf;
r = 0;
goto done;
case '-':
ln_dbgprintf(ctx, "annotate op '-' not yet implemented - failing");
/*FALLTHROUGH*/
default:ln_errprintf(ctx, 0, "invalid annotate operation '%c': %s", buf[i], buf+i);
goto fail;
}
i++;
if(i == lenBuf) goto fail; /* nothing left to process */
CHKR(getFieldName(ctx, buf, lenBuf, &i, &fieldName));
if(i == lenBuf) goto fail; /* nothing left to process */
if(buf[i] != '=') goto fail; /* format error */
i++;
skipWhitespace(ctx, buf, lenBuf, &i);
if(buf[i] != '"') goto fail; /* format error */
++i;
while(i < lenBuf && buf[i] != '"') {
if(fieldVal == NULL) {
CHKN(fieldVal = es_newStr(32));
}
CHKR(es_addChar(&fieldVal, buf[i]));
++i;
}
*offs = (i == lenBuf) ? i : i+1;
CHKR(ln_addAnnotOp(annot, opc, fieldName, fieldVal));
r = 0;
done: return r;
fail: return -1;
}
/**
* Process a new annotation and add it to the annotation set.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset where annotation starts
* @returns 0 on success, something else otherwise
*/
static int
processAnnotate(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t offs)
{
int r;
es_str_t *tag = NULL;
ln_annot *annot;
ln_dbgprintf(ctx, "sample annotation to add: '%s'", buf+offs);
CHKR(getFieldName(ctx, buf, lenBuf, &offs, &tag));
skipWhitespace(ctx, buf, lenBuf, &offs);
if(buf[offs] != ':' || tag == NULL) {
ln_dbgprintf(ctx, "invalid tag field in annotation, line is '%s'", buf);
r=-1;
goto done;
}
++offs;
/* we got an annotation! */
CHKN(annot = ln_newAnnot(tag));
while(offs < lenBuf) {
CHKR(getAnnotationOp(ctx, annot, buf, lenBuf, &offs));
}
r = ln_addAnnotToSet(ctx->pas, annot);
done: return r;
}
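/* For illustration (sketch only): an annotation line consists of a tag,
* a colon and one or more '+' operations, e.g.
*
* annotate=login:+action="login" +class="security"
*
* getFieldName() extracts the tag "login" and each getAnnotationOp() call
* adds one field/value pair to the annotation set for that tag.
*/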
/**
* Process an include directive. This permits adding an unlimited number
* of layers of include files.
*
* @param[in] ctx current context
* @param[in] buf line buffer, a C-string
* @param[in] offs offset where the file name starts
* @returns 0 on success, something else otherwise
*/
static int
processInclude(ln_ctx ctx, const char *buf, const size_t offs)
{
int r;
const char *const conf_file_save = ctx->conf_file;
char *const fname = strdup(buf+offs);
size_t lenfname = strlen(fname);
const unsigned conf_ln_nbr_save = ctx->conf_ln_nbr;
/* trim trailing whitespace - not optimized but also no need to */
for(size_t i = lenfname - 1 ; i > 0 ; --i) {
if(isspace(fname[i])) {
fname[i] = '\0';
--lenfname;
} else {
break; /* stop at the first non-whitespace character from the end */
}
}
CHKR(ln_loadSamples(ctx, fname));
done:
free(fname);
ctx->conf_file = conf_file_save;
ctx->conf_ln_nbr = conf_ln_nbr_save;
return r;
}
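/* For illustration: an include line simply names another rulebase file
* (the name below is hypothetical), e.g.
*
* include=common-rules.rb
*
* Trailing whitespace is trimmed and the file is loaded via
* ln_loadSamples(), so includes may nest to arbitrary depth. Relative
* names may also be resolved via the LIBLOGNORM_RULEBASES directory,
* see tryOpenRBFile() below.
*/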
/**
* Reads a rule (sample) stored in buffer buf and creates a new ln_samp object
* out of it, which it adds to the pdag (if required).
*
* @param[ctx] ctx current library context
* @param[buf] cstr buffer containing the string contents of the sample
* @param[lenBuf] length of the sample contained within buf
* @return standard error code
*/
static int
ln_processSamp(ln_ctx ctx, const char *buf, const size_t lenBuf)
{
int r = 0;
es_str_t *typeStr = NULL;
size_t offs;
if(getLineType(buf, lenBuf, &offs, &typeStr) != 0)
goto done;
if(!es_strconstcmp(typeStr, "prefix")) {
if(getPrefix(buf, lenBuf, offs, &ctx->rulePrefix) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "extendprefix")) {
if(extendPrefix(ctx, buf, lenBuf, offs) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "rule")) {
if(processRule(ctx, buf, lenBuf, offs) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "type")) {
if(processType(ctx, buf, lenBuf, offs) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "annotate")) {
if(processAnnotate(ctx, buf, lenBuf, offs) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "include")) {
CHKR(processInclude(ctx, buf, offs));
} else {
char *str;
str = es_str2cstr(typeStr, NULL);
ln_errprintf(ctx, 0, "invalid record type detected: '%s'", str);
free(str);
goto done;
}
done:
if(typeStr != NULL)
es_deleteStr(typeStr);
return r;
}
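/* For illustration, a minimal v2 rulebase exercising the line types
* dispatched above might look like this (sketch only; file names and
* field names are made up):
*
* version=2
* prefix=%timestamp:date-rfc3164% %host:word%
* rule=ssh,login: accepted password for %user:word%
* annotate=login:+event="user-login"
* include=more-rules.rb
*/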
/**
* Read a character from our sample source.
*/
static int
ln_sampReadChar(const ln_ctx ctx, FILE *const __restrict__ repo, const char **inpbuf)
{
int c;
assert((repo != NULL && inpbuf == NULL) || (repo == NULL && inpbuf != NULL));
if(repo == NULL) {
c = (**inpbuf == '\0') ? EOF : *(*inpbuf)++;
} else {
c = fgetc(repo);
}
return c;
}
/* note: comments are only supported at beginning of line! */
/* skip to end of line */
void
ln_sampSkipCommentLine(ln_ctx ctx, FILE * const __restrict__ repo, const char **inpbuf)
{
int c;
do {
c = ln_sampReadChar(ctx, repo, inpbuf);
} while(c != EOF && c != '\n');
++ctx->conf_ln_nbr;
}
/* This checks whether, in a multi-line rule, the next line looks like a new
* rule, which would mean we have some unmatched percent signs inside
* our rule (what we call a "runaway rule"). This can easily happen and
* is otherwise hard to debug, so let's see if it is the case...
* @return 1 if this is a runaway rule, 0 if not
*/
int
ln_sampChkRunawayRule(ln_ctx ctx, FILE *const __restrict__ repo, const char **inpbuf)
{
int r = 1;
fpos_t fpos;
char buf[6];
int cont = 1;
int read;
fgetpos(repo, &fpos);
while(cont) {
fpos_t inner_fpos;
fgetpos(repo, &inner_fpos);
if((read = fread(buf, sizeof(char), sizeof(buf)-1, repo)) == 0) {
r = 0;
goto done;
}
if(buf[0] == '\n') {
fsetpos(repo, &inner_fpos);
if(fread(buf, sizeof(char), 1, repo)) {}; /* skip '\n' */
continue;
} else if(buf[0] == '#') {
fsetpos(repo, &inner_fpos);
const unsigned conf_ln_nbr_save = ctx->conf_ln_nbr;
ln_sampSkipCommentLine(ctx, repo, inpbuf);
ctx->conf_ln_nbr = conf_ln_nbr_save;
continue;
}
if(read != 5)
goto done; /* cannot be a rule= line! */
cont = 0; /* no comment, so we can decide */
buf[5] = '\0';
if(!strncmp(buf, "rule=", 5)) {
ln_errprintf(ctx, 0, "line has 'rule=' at begin of line, which "
"does look like a typo in the previous lines (unmatched "
"%% character) and is forbidden. If valid, please re-format "
"the rule to start with other characters. Rule ignored.");
goto done;
}
}
r = 0;
done:
fsetpos(repo, &fpos);
return r;
}
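/* For illustration, a "runaway rule" looks like this (sketch; note the
* single, unmatched '%' in the first line):
*
* rule=:user %user:word logged in
* rule=:session closed
*
* While reading the first line the parser is still "inside" a field when
* the newline is reached; the peek-ahead above then sees that the next
* line starts with "rule=" and reports the problem instead of silently
* merging both lines into one sample.
*/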
/**
* Read a rule (sample) from repository (sequentially).
*
* Reads a sample starting with the current file position and
* creates a new ln_samp object out of it, which it adds to the
* pdag.
*
* @param[in] ctx current library context
* @param[in] repo repository descriptor if file input is desired
* @param[in/out] inpbuf pointer to pointer to the input buffer; this is used
* if a string is provided instead of a file. If so, this pointer is advanced
* as data is consumed.
* @param[out] isEof must be set to 0 on entry and is switched to 1 if EOF occurred.
* @return standard error code
*/
static int
ln_sampRead(ln_ctx ctx, FILE *const __restrict__ repo, const char **inpbuf,
int *const __restrict__ isEof)
{
int r = 0;
char buf[64*1024]; /**< max size of rule - TODO: make configurable */
size_t i = 0;
int inParser = 0;
int done = 0;
while(!done) {
const int c = ln_sampReadChar(ctx, repo, inpbuf);
if(c == EOF) {
*isEof = 1;
if(i == 0)
goto done;
else
done = 1; /* last line missing LF, still process it! */
} else if(c == '\n') {
++ctx->conf_ln_nbr;
if(inParser) {
if(ln_sampChkRunawayRule(ctx, repo, inpbuf)) {
/* ignore previous rule */
inParser = 0;
i = 0;
}
}
if(!inParser && i != 0)
done = 1;
} else if(c == '#' && i == 0) {
ln_sampSkipCommentLine(ctx, repo, inpbuf);
i = 0; /* back to beginning */
} else {
if(c == '%')
inParser = (inParser) ? 0 : 1;
buf[i++] = c;
if(i >= sizeof(buf)) {
ln_errprintf(ctx, 0, "line is too long");
goto done;
}
}
}
buf[i] = '\0';
ln_dbgprintf(ctx, "read rulebase line[~%d]: '%s'", ctx->conf_ln_nbr, buf);
CHKR(ln_processSamp(ctx, buf, i));
done:
return r;
}
/* check rulebase format version. Returns 2 if this is v2 rulebase,
* 1 for any pre-v2 and -1 if there was a problem reading the file.
*/
static int
checkVersion(FILE *const fp)
{
char buf[64];
if(fgets(buf, sizeof(buf), fp) == NULL)
return -1;
if(!strcmp(buf, "version=2\n")) {
return 2;
} else {
return 1;
}
}
/* we have a v1 rulebase, so let's do all stuff that we need
* to make that ole piece of ... work.
*/
static int
doOldCruft(ln_ctx ctx, const char *file)
{
int r = -1;
if((ctx->ptree = ln_newPTree(ctx, NULL)) == NULL) {
free(ctx);
r = -1;
goto done;
}
r = ln_v1_loadSamples(ctx, file);
done:
return r;
}
/* try to open a rulebase file. This also tries to see if we need to
* load it from some pre-configured alternative location.
* @returns open file pointer or NULL in case of error
*/
static FILE *
tryOpenRBFile(ln_ctx ctx, const char *const file)
{
FILE *repo = NULL;
if((repo = fopen(file, "r")) != NULL)
goto done;
const int eno1 = errno;
const char *const rb_lib = getenv("LIBLOGNORM_RULEBASES");
if(rb_lib == NULL || *file == '/') {
ln_errprintf(ctx, eno1, "cannot open rulebase '%s'", file);
goto done;
}
char *fname = NULL;
int len;
len = asprintf(&fname, (rb_lib[strlen(rb_lib)-1] == '/') ? "%s%s" : "%s/%s", rb_lib, file);
if(len == -1) {
ln_errprintf(ctx, errno, "alloc error: cannot open rulebase '%s'", file);
goto done;
}
if((repo = fopen(fname, "r")) == NULL) {
const int eno2 = errno;
ln_errprintf(ctx, eno1, "cannot open rulebase '%s'", file);
ln_errprintf(ctx, eno2, "also tried to locate %s via "
"rulebase directory without success. Expanded "
"name was '%s'", file, fname);
}
free(fname);
done:
return repo;
}
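/* For illustration: with, say,
*
* export LIBLOGNORM_RULEBASES=/etc/liblognorm/rulebases
*
* a relative name such as "cisco.rb" (hypothetical) that cannot be opened
* directly is also tried as "/etc/liblognorm/rulebases/cisco.rb".
* Absolute names (starting with '/') are never redirected.
*/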
/* @return 0 if all is ok, 1 if an error occurred */
int
ln_sampLoad(ln_ctx ctx, const char *file)
{
int r = 1;
FILE *repo;
int isEof = 0;
ln_dbgprintf(ctx, "loading rulebase file '%s'", file);
if(file == NULL) goto done;
if((repo = tryOpenRBFile(ctx, file)) == NULL)
goto done;
const int version = checkVersion(repo);
ln_dbgprintf(ctx, "rulebase version is %d\n", version);
if(version == -1) {
ln_errprintf(ctx, errno, "error determing version of %s", file);
goto done;
}
if(ctx->version != 0 && version != ctx->version) {
ln_errprintf(ctx, errno, "rulebase '%s' must be version %d, but is version %d "
" - can not be processed", file, ctx->version, version);
goto done;
}
ctx->version = version;
if(ctx->version == 1) {
fclose(repo);
r = doOldCruft(ctx, file);
goto done;
}
/* now we are in our native code */
++ctx->conf_ln_nbr; /* "version=2" is line 1! */
while(!isEof) {
CHKR(ln_sampRead(ctx, repo, NULL, &isEof));
}
fclose(repo);
r = 0;
if(ctx->include_level == 1)
ln_pdagOptimize(ctx);
done:
return r;
}
/* @return 0 if all is ok, 1 if an error occurred */
int
ln_sampLoadFromString(ln_ctx ctx, const char *string)
{
int r = 1;
int isEof = 0;
if(string == NULL)
goto done;
ln_dbgprintf(ctx, "loading v2 rulebase from string '%s'", string);
ctx->version = 2;
while(!isEof) {
CHKR(ln_sampRead(ctx, NULL, &string, &isEof));
}
r = 0;
if(ctx->include_level == 1)
ln_pdagOptimize(ctx);
done:
return r;
}
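/* A minimal usage sketch (assuming "ctx" is an already initialized
* context):
*
* const char *rb =
* "rule=:%user:word% logged in\n"
* "rule=:%user:word% logged out\n";
* if(ln_sampLoadFromString(ctx, rb) != 0) {
* ... handle error ...
* }
*
* Note that, unlike file loading, no "version=2" line is expected here:
* the string loader always assumes a v2 rulebase.
*/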
liblognorm-2.0.6/src/v1_samp.h 0000644 0001750 0001750 00000006370 13273030617 013122 0000000 0000000 /**
* @file v1_samp.h
* @brief Object to process log samples.
* @author Rainer Gerhards
*
* This object handles log samples, as contained in actual log sample files.
* It co-operates with the ptree object to build the actual parser tree.
*//*
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010 by Rainer Gerhards and Adiscon GmbH.
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_V1_SAMPLES_H_INCLUDED
#define LIBLOGNORM_V1_SAMPLES_H_INCLUDED
#include <libestr.h> /* we need es_size_t */
#include <stdio.h> /* we need FILE */
/**
* A single log sample.
*/
struct ln_v1_samp {
es_str_t *msg;
};
/**
* Reads a sample stored in buffer buf and creates a new ln_v1_samp object
* out of it.
*
* @note
* It is the caller's responsibility to delete the newly
* created ln_v1_samp object if it is no longer needed.
*
* @param[ctx] ctx current library context
* @param[buf] cstr buffer containing the string contents of the sample
* @param[lenBuf] length of the sample contained within buf
* @return Newly created object or NULL if an error occurred.
*/
struct ln_v1_samp *
ln_v1_processSamp(ln_ctx ctx, const char *buf, es_size_t lenBuf);
/**
* Read a sample from repository (sequentially).
*
* Reads a sample starting with the current file position and
* creates a new ln_v1_samp object out of it.
*
* @note
* It is the caller's responsibility to delete the newly
* created ln_v1_samp object if it is no longer needed.
*
* @param[in] ctx current library context
* @param[in] repo repository descriptor
* @param[out] isEof must be set to 0 on entry and is switched to 1 if EOF occurred.
* @return Newly created object or NULL if an error or EOF occurred.
*/
struct ln_v1_samp *
ln_v1_sampRead(ln_ctx ctx, FILE *repo, int *isEof);
/**
* Free ln_v1_samp object.
*/
void
ln_v1_sampFree(ln_ctx ctx, struct ln_v1_samp *samp);
/**
* Parse a given sample
*
* @param[in] ctx current library context
* @param[in] rule string (with prefix and suffix '%' markers)
* @param[in] offset in rule-string to start at (it should be pointed to
* starting character: '%')
* @param[in] temp string buffer(working space),
* externalized for efficiency reasons
* @param[out] return code (0 means success)
* @return newly created node, which can be added to sample tree.
*/
ln_fieldList_t*
ln_v1_parseFieldDescr(ln_ctx ctx, es_str_t *rule, es_size_t *bufOffs,
es_str_t **str, int* ret);
#endif /* #ifndef LIBLOGNORM_V1_SAMPLES_H_INCLUDED */
liblognorm-2.0.6/src/liblognorm.h 0000644 0001750 0001750 00000022406 13273030617 013716 0000000 0000000 /**
* @file liblognorm.h
* @brief The public liblognorm API.
*
* Functions other than those defined here MUST not be called by
* a liblognorm "user" application.
*
* This file is meant to be included by applications using liblognorm.
* For lognorm library files themselves, include "lognorm.h".
*//**
* @mainpage
* Liblognorm is an easy to use and fast samples-based log normalization
* library.
*
* It can be passed a stream of arbitrary log messages, one at a time, and for
* each message it will output well-defined name-value pairs and a set of
* tags describing the message.
*
* For further details, see its initial announcement available at
* http://blog.gerhards.net/2010/10/introducing-liblognorm.html
*
* The public interface of this library is described in liblognorm.h.
*
* Liblognorm fully supports Unicode. Like most Linux tools, it operates
* on UTF-8 natively, called "passive mode". This was decided because we
* so can keep the size of data structures small while still supporting
* all of the world's languages (actually more than when we did UCS-2).
*
* At the technical level, we can handle UTF-8 multibyte sequences transparently.
* Liblognorm needs to look at a few US-ASCII characters to do the
* sample base parsing (things to indicate fields), so this is no
* issue. Inside the parse tree, a multibyte sequence can simply be processed
* as if it were a sequence of individual characters, each forming its
* own symbol. In fact, this even allows for somewhat greater parsing
* speed.
*//*
*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2017 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_H_INCLUDED
#define LIBLOGNORM_H_INCLUDED
#include /* we need size_t */
#include
/* error codes */
#define LN_NOMEM -1
#define LN_INVLDFDESCR -1
#define LN_BADCONFIG -250
#define LN_BADPARSERSTATE -500
#define LN_WRONGPARSER -1000
#define LN_RB_LINE_TOO_LONG -1001
#define LN_OVER_SIZE_LIMIT -1002
/**
* The library context descriptor.
* This is used to permit multiple independent instances of the
* library to be called within a single program. This is most
* useful for plugin-based architectures.
*/
typedef struct ln_ctx_s* ln_ctx;
/* API */
/**
* Return library version string.
*
* Returns the version of the currently used library.
*
* @return Zero-Terminated library version string.
*/
/* Note: this MUST NOT be inline to make sure the actual library
* has the right version, not just what was used to compile!
*/
const char *ln_version(void);
/**
* Return whether the library is built with advanced statistics
* activated.
*
* @return 1 if advanced stats are active, 0 if not
*/
int ln_hasAdvancedStats(void);
/**
* Initialize a library context.
*
* To prevent memory leaks, ln_exitCtx() must be called on a library
* context that is no longer needed.
*
* @return new library context or NULL if an error occurred
*/
ln_ctx ln_initCtx(void);
/**
* Inherit control attributes from a library context.
*
* This does not copy the parse-tree, but does copy
* behaviour-controlling attributes such as enableRegex.
*
* Just as with ln_initCtx, ln_exitCtx() must be called on a library
* context that is no longer needed.
*
* @return new library context or NULL if an error occurred
*/
ln_ctx ln_inherittedCtx(ln_ctx parent);
/**
* Discard a library context.
*
* Frees the resources associated with the given library context. The
* context MUST NOT be accessed after calling this function.
*
* @param ctx The context to be discarded.
*
* @return Returns zero on success, something else otherwise.
*/
int ln_exitCtx(ln_ctx ctx);
/* binary values, so that we can "or" them together */
#define LN_CTXOPT_ALLOW_REGEX 0x01 /**< permit regex matching */
#define LN_CTXOPT_ADD_EXEC_PATH 0x02 /**< add exec_path attribute (time-consuming!) */
#define LN_CTXOPT_ADD_ORIGINALMSG 0x04 /**< always add original message to output
(not just in error case) */
#define LN_CTXOPT_ADD_RULE 0x08 /**< add mockup rule */
#define LN_CTXOPT_ADD_RULE_LOCATION 0x10 /**< add rule location (file, lineno) to metadata */
/**
* Set options on ctx.
*
* @param ctx The context to be modified.
* @param opts a potentially or-ed list of options, see LN_CTXOPT_*
*/
void
ln_setCtxOpts(ln_ctx ctx, unsigned opts);
/**
* Set a debug message handler (callback).
*
* Liblognorm can provide helpful information for debugging
* - it's internal processing
* - the way a log message is being normalized
*
* It does so by emiting "interesting" information about its processing
* at various stages. A caller can obtain this information by registering
* an entry point. When done so, liblognorm will call the entry point
* whenever it has something to emit. Note that debugging can be rather
* verbose.
*
* The callback will be called with the following three parameters in that order:
* - the caller-provided cookie
* - a zero-terminated string buffer
* - the length of the string buffer, without the trailing NUL byte
*
* @note
* The provided callback function must not call any liblognorm
* APIs except when specifically flagged as safe for calling by a debug
* callback handler.
*
* @param[in] ctx The library context to apply callback to.
* @param[in] cb The function to be called for debugging
* @param[in] cookie Opaque cookie to be passed down to debug handler. Can be
* used for some state tracking by the caller. This is defined as
* void* to support pointers. To play it safe, a pointer should be
* passed (but adventurous folks may also use an unsigned).
*
* @return Returns zero on success, something else otherwise.
*/
int ln_setDebugCB(ln_ctx ctx, void (*cb)(void*, const char*, size_t), void *cookie);
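/* A minimal callback sketch (illustrative only; "my_dbg" is a hypothetical
* name):
*
* static void my_dbg(void *cookie, const char *msg, size_t len)
* {
* fprintf((FILE*)cookie, "liblognorm: %.*s\n", (int)len, msg);
* }
* ...
* ln_setDebugCB(ctx, my_dbg, stderr);
* ln_enableDebug(ctx, 1);
*/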
/**
* Set an error message handler (callback).
*
* If set, this is used to emit error messages of interest to the user, e.g.
* on failures during rulebase load. It is suggested that a caller uses this
* feedback to aid its users in resolving issues.
* Its semantics are otherwise exactly the same as ln_setDebugCB().
*/
int ln_setErrMsgCB(ln_ctx ctx, void (*cb)(void*, const char*, size_t), void *cookie);
/**
* enable or disable debug mode.
*
* @param[in] ctx context
* @param[in] b boolean 0 - disable debug mode, 1 - enable debug mode
*/
void ln_enableDebug(ln_ctx ctx, int i);
/**
* Load a (log) sample file.
*
* The file must contain log samples in syntactically correct format. Samples are added
* to the set already loaded in the current context. If there is a sample with duplicate
* semantics, this sample will be ignored. Most importantly, this can \b not be used
* to change tag assignments for a given sample.
*
* @param[in] ctx The library context to apply callback to.
* @param[in] file Name of file to be loaded.
*
* @return Returns zero on success, something else otherwise.
*/
int ln_loadSamples(ln_ctx ctx, const char *file);
/**
* Load a rulebase via a string.
*
* Note: this can only load v2 samples, v1 is NOT supported.
*
* @param[in] ctx The library context to apply callback to.
* @param[in] string The string with the actual rulebase.
*
* @return Returns zero on success, something else otherwise.
*/
int ln_loadSamplesFromString(ln_ctx ctx, const char *string);
/**
* Normalize a message.
*
* This is the main library entry point. It is called with a message
* to normalize and will return a normalized in-memory representation
* of it.
*
* If an error occurs, the function returns -1. In that case, an
* in-memory event representation is still generated if json_p is
* non-NULL; the event then contains further error details in
* normalized form.
*
* @note
* This function works on byte-counted strings and as such is able to
* process NUL bytes if they occur inside the message. On the other hand,
* this means that the correct message size, \b excluding the NUL byte,
* must be provided.
*
* @param[in] ctx The library context to use.
* @param[in] str The message string (see note above).
* @param[in] strLen The length of the message in bytes.
* @param[out] json_p A new event record or NULL if an error occurred. Must be
* destructed if no longer needed.
*
* @return Returns zero on success, something else otherwise.
*/
int ln_normalize(ln_ctx ctx, const char *str, const size_t strLen, struct json_object **json_p);
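/* A minimal end-to-end usage sketch (illustrative; error handling
* abbreviated, "myrules.rb" is a hypothetical file name):
*
* ln_ctx ctx = ln_initCtx();
* ln_loadSamples(ctx, "myrules.rb");
* struct json_object *json = NULL;
* const char *msg = "user root logged in";
* if(ln_normalize(ctx, msg, strlen(msg), &json) == 0 && json != NULL) {
* puts(json_object_to_json_string(json));
* }
* if(json != NULL)
* json_object_put(json); // free the event when no longer needed
* ln_exitCtx(ctx);
*/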
#endif /* #ifndef LIBLOGNORM_H_INCLUDED */
liblognorm-2.0.6/src/v1_samp.c 0000644 0001750 0001750 00000055744 13273030617 013126 0000000 0000000 /* samp.c -- code for ln_v1_samp objects.
*
* Copyright 2010-2015 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include
#include
#include
#include
#include
#include
#define LOGNORM_V1_SUBSYSTEM /* indicate we are old cruft */
#include "v1_liblognorm.h"
#include "internal.h"
#include "lognorm.h"
#include "samp.h"
#include "v1_ptree.h"
#include "v1_samp.h"
#include "v1_parser.h"
/**
* Construct a sample object.
*/
struct ln_v1_samp*
ln_v1_sampCreate(ln_ctx __attribute__((unused)) ctx)
{
struct ln_v1_samp* samp;
if((samp = calloc(1, sizeof(struct ln_v1_samp))) == NULL)
goto done;
/* place specific init code here (none at this time) */
done: return samp;
}
void
ln_v1_sampFree(ln_ctx __attribute__((unused)) ctx, struct ln_v1_samp *samp)
{
free(samp);
}
/**
* Extract a field description from a sample.
* The field description is added to the tail of the current
* subtree's field list. The parse buffer must be positioned on the
* leading '%' that starts a field definition. It is a program error
* if this condition is not met.
*
* Note that we break up the object model and access ptree members
* directly. Let's consider ourselves a friend of ptree. This is necessary
* to optimize the structure for a high-speed parsing process.
*
* @param[in] str a temporary work string. This is passed in to save the
* creation overhead
* @returns 0 on success, something else otherwise
*/
static int
addFieldDescr(ln_ctx ctx, struct ln_ptree **subtree, es_str_t *rule,
es_size_t *bufOffs, es_str_t **str)
{
int r;
ln_fieldList_t *node = ln_v1_parseFieldDescr(ctx, rule, bufOffs, str, &r);
assert(subtree != NULL);
if (node != NULL) CHKR(ln_addFDescrToPTree(subtree, node));
done:
return r;
}
ln_fieldList_t*
ln_v1_parseFieldDescr(ln_ctx ctx, es_str_t *rule, es_size_t *bufOffs, es_str_t **str, int* ret)
{
int r = 0;
ln_fieldList_t *node;
es_size_t i = *bufOffs;
char *cstr; /* for debug mode strings */
unsigned char *buf;
es_size_t lenBuf;
void* (*constructor_fn)(ln_fieldList_t *, ln_ctx) = NULL;
buf = es_getBufAddr(rule);
lenBuf = es_strlen(rule);
assert(buf[i] == '%');
++i; /* "eat" ':' */
CHKN(node = calloc(1, sizeof(ln_fieldList_t)));
node->subtree = NULL;
node->next = NULL;
node->data = NULL;
node->raw_data = NULL;
node->parser_data = NULL;
node->parser_data_destructor = NULL;
CHKN(node->name = es_newStr(16));
/* skip leading whitespace in field name */
while(i < lenBuf && isspace(buf[i]))
++i;
while(i < lenBuf && buf[i] != ':') {
CHKR(es_addChar(&node->name, buf[i++]));
}
if(es_strlen(node->name) == 0) {
FAIL(LN_INVLDFDESCR);
}
if(ctx->debug) {
cstr = es_str2cstr(node->name, NULL);
ln_dbgprintf(ctx, "parsed field: '%s'", cstr);
free(cstr);
}
if(buf[i] != ':') {
/* may be valid later if we have a loaded CEE dictionary
* and the name is present inside it.
*/
FAIL(LN_INVLDFDESCR);
}
++i; /* skip ':' */
/* parse and process type (trailing whitespace must be trimmed) */
es_emptyStr(*str);
size_t j = i;
/* scan for terminator */
while(j < lenBuf && buf[j] != ':' && buf[j] != '%')
++j;
/* now trim trailing space backwards */
size_t next = j;
--j;
while(j >= i && isspace(buf[j]))
--j;
/* now copy */
while(i <= j) {
CHKR(es_addChar(str, buf[i++]));
}
/* finally move i to consumed position */
i = next;
if(i == lenBuf) {
FAIL(LN_INVLDFDESCR);
}
node->isIPTables = 0; /* first assume no special parser is used */
if(!es_strconstcmp(*str, "date-rfc3164")) {
node->parser = ln_parseRFC3164Date;
} else if(!es_strconstcmp(*str, "date-rfc5424")) {
node->parser = ln_parseRFC5424Date;
} else if(!es_strconstcmp(*str, "number")) {
node->parser = ln_parseNumber;
} else if(!es_strconstcmp(*str, "float")) {
node->parser = ln_parseFloat;
} else if(!es_strconstcmp(*str, "hexnumber")) {
node->parser = ln_parseHexNumber;
} else if(!es_strconstcmp(*str, "kernel-timestamp")) {
node->parser = ln_parseKernelTimestamp;
} else if(!es_strconstcmp(*str, "whitespace")) {
node->parser = ln_parseWhitespace;
} else if(!es_strconstcmp(*str, "ipv4")) {
node->parser = ln_parseIPv4;
} else if(!es_strconstcmp(*str, "ipv6")) {
node->parser = ln_parseIPv6;
} else if(!es_strconstcmp(*str, "word")) {
node->parser = ln_parseWord;
} else if(!es_strconstcmp(*str, "alpha")) {
node->parser = ln_parseAlpha;
} else if(!es_strconstcmp(*str, "rest")) {
node->parser = ln_parseRest;
} else if(!es_strconstcmp(*str, "op-quoted-string")) {
node->parser = ln_parseOpQuotedString;
} else if(!es_strconstcmp(*str, "quoted-string")) {
node->parser = ln_parseQuotedString;
} else if(!es_strconstcmp(*str, "date-iso")) {
node->parser = ln_parseISODate;
} else if(!es_strconstcmp(*str, "time-24hr")) {
node->parser = ln_parseTime24hr;
} else if(!es_strconstcmp(*str, "time-12hr")) {
node->parser = ln_parseTime12hr;
} else if(!es_strconstcmp(*str, "duration")) {
node->parser = ln_parseDuration;
} else if(!es_strconstcmp(*str, "cisco-interface-spec")) {
node->parser = ln_parseCiscoInterfaceSpec;
} else if(!es_strconstcmp(*str, "json")) {
node->parser = ln_parseJSON;
} else if(!es_strconstcmp(*str, "cee-syslog")) {
node->parser = ln_parseCEESyslog;
} else if(!es_strconstcmp(*str, "mac48")) {
node->parser = ln_parseMAC48;
} else if(!es_strconstcmp(*str, "name-value-list")) {
node->parser = ln_parseNameValue;
} else if(!es_strconstcmp(*str, "cef")) {
node->parser = ln_parseCEF;
} else if(!es_strconstcmp(*str, "checkpoint-lea")) {
node->parser = ln_parseCheckpointLEA;
} else if(!es_strconstcmp(*str, "v2-iptables")) {
node->parser = ln_parsev2IPTables;
} else if(!es_strconstcmp(*str, "iptables")) {
node->parser = NULL;
node->isIPTables = 1;
} else if(!es_strconstcmp(*str, "string-to")) {
/* TODO: check extra data!!!! (very important) */
node->parser = ln_parseStringTo;
} else if(!es_strconstcmp(*str, "char-to")) {
/* TODO: check extra data!!!! (very important) */
node->parser = ln_parseCharTo;
} else if(!es_strconstcmp(*str, "char-sep")) {
/* TODO: check extra data!!!! (very important) */
node->parser = ln_parseCharSeparated;
} else if(!es_strconstcmp(*str, "tokenized")) {
node->parser = ln_parseTokenized;
constructor_fn = tokenized_parser_data_constructor;
node->parser_data_destructor = tokenized_parser_data_destructor;
}
#ifdef FEATURE_REGEXP
else if(!es_strconstcmp(*str, "regex")) {
node->parser = ln_parseRegex;
constructor_fn = regex_parser_data_constructor;
node->parser_data_destructor = regex_parser_data_destructor;
}
#endif
else if (!es_strconstcmp(*str, "recursive")) {
node->parser = ln_parseRecursive;
constructor_fn = recursive_parser_data_constructor;
node->parser_data_destructor = recursive_parser_data_destructor;
} else if (!es_strconstcmp(*str, "descent")) {
node->parser = ln_parseRecursive;
constructor_fn = descent_parser_data_constructor;
node->parser_data_destructor = recursive_parser_data_destructor;
} else if (!es_strconstcmp(*str, "interpret")) {
node->parser = ln_parseInterpret;
constructor_fn = interpret_parser_data_constructor;
node->parser_data_destructor = interpret_parser_data_destructor;
} else if (!es_strconstcmp(*str, "suffixed")) {
node->parser = ln_parseSuffixed;
constructor_fn = suffixed_parser_data_constructor;
node->parser_data_destructor = suffixed_parser_data_destructor;
} else if (!es_strconstcmp(*str, "named_suffixed")) {
node->parser = ln_parseSuffixed;
constructor_fn = named_suffixed_parser_data_constructor;
node->parser_data_destructor = suffixed_parser_data_destructor;
} else {
cstr = es_str2cstr(*str, NULL);
ln_errprintf(ctx, 0, "invalid field type '%s'", cstr);
free(cstr);
FAIL(LN_INVLDFDESCR);
}
if(buf[i] == '%') {
i++;
} else {
/* parse extra data */
CHKN(node->data = es_newStr(8));
i++;
while(i < lenBuf) {
if(buf[i] == '%') {
++i;
break; /* end of field */
}
CHKR(es_addChar(&node->data, buf[i++]));
}
node->raw_data = es_strdup(node->data);
es_unescapeStr(node->data);
if(ctx->debug) {
cstr = es_str2cstr(node->data, NULL);
ln_dbgprintf(ctx, "parsed extra data: '%s'", cstr);
free(cstr);
}
}
if (constructor_fn) node->parser_data = constructor_fn(node, ctx);
*bufOffs = i;
done:
if (r != 0) {
if (node != NULL && node->name != NULL)
es_deleteStr(node->name);
free(node);
node = NULL;
}
*ret = r;
return node;
}
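/* For illustration, a v1 field descriptor as parsed above has the form
* %name:type% or %name:type:extra-data%, e.g. (sketch only):
*
* rule=:%date:date-rfc3164% %host:word% %msg:rest%
* rule=:%field:char-to:,%, %rest:rest%
*
* In the second line the extra data "," tells the char-to parser which
* character terminates the field.
*/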
/**
* Parse a Literal string out of the template and add it to the tree.
* @param[in] ctx the context
* @param[in/out] subtree on entry, current subtree, on exist newest
* deepest subtree
* @param[in] rule string with current rule
* @param[in/out] bufOffs parse pointer, up to which offset is parsed
* (is updated so that it points to first char after consumed
* string on exit).
* @param[out] str literal extracted (is empty, when no litral could be found)
* @return 0 on success, something else otherwise
*/
static int
parseLiteral(ln_ctx ctx, struct ln_ptree **subtree, es_str_t *rule,
es_size_t *bufOffs, es_str_t **str)
{
int r = 0;
es_size_t i = *bufOffs;
unsigned char *buf;
es_size_t lenBuf;
es_emptyStr(*str);
buf = es_getBufAddr(rule);
lenBuf = es_strlen(rule);
/* extract maximum length literal */
while(i < lenBuf) {
if(buf[i] == '%') {
if(i+1 < lenBuf && buf[i+1] != '%') {
break; /* field start is end of literal */
}
if (++i == lenBuf) break;
}
CHKR(es_addChar(str, buf[i]));
++i;
}
es_unescapeStr(*str);
if(ctx->debug) {
char *cstr = es_str2cstr(*str, NULL);
ln_dbgprintf(ctx, "parsed literal: '%s'", cstr);
free(cstr);
}
*subtree = ln_buildPTree(*subtree, *str, 0);
*bufOffs = i;
r = 0;
done: return r;
}
/* Implementation note:
* We read in the sample, and split it into chunks of literal text and
* fields. Each literal text is added as a whole to the tree, as is each
* field individually. To do so, we keep track of our current subtree
* root, which changes whenever a new part of the tree is built. It is
* set to the then-lowest part of the tree, where the next piece of sample
* data is to be added.
*
* This function processes the whole string or returns an error.
*
* format: literal1%field:type:extra-data%literal2
*
* @returns the new subtree root (or NULL in case of error)
*/
static int
addSampToTree(ln_ctx ctx, es_str_t *rule, struct json_object *tagBucket)
{
int r = -1;
struct ln_ptree* subtree;
es_str_t *str = NULL;
es_size_t i;
subtree = ctx->ptree;
CHKN(str = es_newStr(256));
i = 0;
while(i < es_strlen(rule)) {
ln_dbgprintf(ctx, "addSampToTree %d of %d", i, es_strlen(rule));
CHKR(parseLiteral(ctx, &subtree, rule, &i, &str));
/* After the literal there can only be a field */
if (i < es_strlen(rule)) {
CHKR(addFieldDescr(ctx, &subtree, rule, &i, &str));
if (i == es_strlen(rule)) {
/* finish the tree with empty literal to avoid false merging*/
CHKR(parseLiteral(ctx, &subtree, rule, &i, &str));
}
}
}
ln_dbgprintf(ctx, "end addSampToTree %d of %d", i, es_strlen(rule));
/* we are at the end of rule processing, so this node is a terminal */
subtree->flags.isTerminal = 1;
subtree->tags = tagBucket;
done:
if(str != NULL)
es_deleteStr(str);
return r;
}
/**
* get the initial word of a rule line that tells us the type of the
* line.
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[out] offs offset after "="
* @param[out] str string with "linetype-word" (newly created)
* @returns 0 on success, something else otherwise
*/
static int
getLineType(const char *buf, es_size_t lenBuf, es_size_t *offs, es_str_t **str)
{
int r = -1;
es_size_t i;
*str = es_newStr(16);
for(i = 0 ; i < lenBuf && buf[i] != '=' ; ++i) {
CHKR(es_addChar(str, buf[i]));
}
if(i < lenBuf)
++i; /* skip over '=' */
*offs = i;
done: return r;
}
/**
* Get a new common prefix from the config file. That is actually everything from
* the current offset to the end of line.
*
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset after "="
* @param[in/out] str string to store the common prefix. If NULL, it is created,
* otherwise it is emptied.
* @returns 0 on success, something else otherwise
*/
static int
getPrefix(const char *buf, es_size_t lenBuf, es_size_t offs, es_str_t **str)
{
int r;
if(*str == NULL) {
CHKN(*str = es_newStr(lenBuf - offs));
} else {
es_emptyStr(*str);
}
r = es_addBuf(str, (char*)buf + offs, lenBuf - offs);
done: return r;
}
/**
* Extend the common prefix. This means that the line is concatenated
* to the prefix. This is useful if the same rulebase is to be used with
* different prefixes (well, not strictly necessary, but probably useful).
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset to-be-added text starts
* @returns 0 on success, something else otherwise
*/
static int
extendPrefix(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t offs)
{
return es_addBuf(&ctx->rulePrefix, (char*)buf+offs, lenBuf - offs);
}
/**
* Add a tag to the tag bucket. Helper to processTags.
* @param[in] ctx current context
* @param[in] tagname string with tag name
* @param[out] tagBucket tagbucket to which new tags shall be added
* the tagbucket is created if it is NULL
* @returns 0 on success, something else otherwise
*/
static int
addTagStrToBucket(ln_ctx ctx, es_str_t *tagname, struct json_object **tagBucket)
{
int r = -1;
char *cstr;
struct json_object *tag;
if(*tagBucket == NULL) {
CHKN(*tagBucket = json_object_new_array());
}
cstr = es_str2cstr(tagname, NULL);
ln_dbgprintf(ctx, "tag found: '%s'", cstr);
CHKN(tag = json_object_new_string(cstr));
json_object_array_add(*tagBucket, tag);
free(cstr);
r = 0;
done: return r;
}
/**
* Extract the tags and create a tag bucket out of them
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in,out] poffs offset where tags start, on exit and success
* offset after tag part (excluding ':')
* @param[out] tagBucket tagbucket to which new tags shall be added
* the tagbucket is created if it is NULL
* @returns 0 on success, something else otherwise
*/
static int
processTags(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t *poffs, struct json_object **tagBucket)
{
int r = -1;
es_str_t *str = NULL;
es_size_t i;
assert(poffs != NULL);
i = *poffs;
while(i < lenBuf && buf[i] != ':') {
if(buf[i] == ',') {
/* end of this tag */
CHKR(addTagStrToBucket(ctx, str, tagBucket));
es_deleteStr(str);
str = NULL;
} else {
if(str == NULL) {
CHKN(str = es_newStr(32));
}
CHKR(es_addChar(&str, buf[i]));
}
++i;
}
if(buf[i] != ':')
goto done;
++i; /* skip ':' */
if(str != NULL) {
CHKR(addTagStrToBucket(ctx, str, tagBucket));
es_deleteStr(str);
}
*poffs = i;
r = 0;
done: return r;
}
/**
* Process a new rule and add it to tree.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset where rule starts
* @returns 0 on success, something else otherwise
*/
static int
processRule(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t offs)
{
int r = -1;
es_str_t *str;
struct json_object *tagBucket = NULL;
ln_dbgprintf(ctx, "sample line to add: '%s'\n", buf+offs);
CHKR(processTags(ctx, buf, lenBuf, &offs, &tagBucket));
if(offs == lenBuf) {
ln_dbgprintf(ctx, "error, actual message sample part is missing");
// TODO: provide some error indicator to app? We definitely must do (a callback?)
goto done;
}
if(ctx->rulePrefix == NULL) {
CHKN(str = es_newStr(lenBuf));
} else {
CHKN(str = es_strdup(ctx->rulePrefix));
}
CHKR(es_addBuf(&str, (char*)buf + offs, lenBuf - offs));
addSampToTree(ctx, str, tagBucket);
es_deleteStr(str);
r = 0;
done: return r;
}
/**
* Obtain a field name from a rule base line.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in/out] offs on entry: offset where tag starts,
* on exit: updated offset AFTER TAG and (':')
* @param [out] strTag obtained tag, if successful
* @returns 0 on success, something else otherwise
*/
static int
getFieldName(ln_ctx __attribute__((unused)) ctx, const char *buf, es_size_t lenBuf, es_size_t *offs,
es_str_t **strTag)
{
int r = -1;
es_size_t i;
i = *offs;
while(i < lenBuf &&
(isalnum(buf[i]) || buf[i] == '_' || buf[i] == '.')) {
if(*strTag == NULL) {
CHKN(*strTag = es_newStr(32));
}
CHKR(es_addChar(strTag, buf[i]));
++i;
}
*offs = i;
r = 0;
done: return r;
}
/**
* Skip over whitespace.
* Skips any whitespace present at the offset.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in/out] offs on entry: offset first unprocessed position
*/
static void
skipWhitespace(ln_ctx __attribute__((unused)) ctx, const char *buf, es_size_t lenBuf, es_size_t *offs)
{
while(*offs < lenBuf && isspace(buf[*offs])) {
(*offs)++;
}
}
/**
* Obtain an annotation (field) operation.
* This usually is a plus or minus sign followed by a field name
* followed (if plus) by an equal sign and the field value. On entry,
* offs must be positioned on the first unprocessed field (after ':' for
* the initial field!). Extra whitespace is detected and, if present,
* skipped. The obtained operation is added to the annotation set provided.
* Note that extracted string objects are passed to the annotation; thus it
* is vital NOT to free them (most importantly, this is *not* a memory leak).
*
* @param[in] ctx current context
* @param[in] annot active annotation set to which the operation is to be added
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in/out] offs on entry: offset of first unprocessed character,
* on exit: updated offset after the processed operation
* @returns 0 on success, something else otherwise
*/
static int
getAnnotationOp(ln_ctx ctx, ln_annot *annot, const char *buf, es_size_t lenBuf, es_size_t *offs)
{
int r = -1;
es_size_t i;
es_str_t *fieldName = NULL;
es_str_t *fieldVal = NULL;
ln_annot_opcode opc;
i = *offs;
skipWhitespace(ctx, buf, lenBuf, &i);
if(i == lenBuf) {
r = 0;
goto done; /* nothing left to process (no error!) */
}
if(buf[i] == '+') {
opc = ln_annot_ADD;
} else if(buf[i] == '-') {
ln_dbgprintf(ctx, "annotate op '-' not yet implemented - failing");
goto fail;
} else {
ln_dbgprintf(ctx, "invalid annotate opcode '%c' - failing" , buf[i]);
goto fail;
}
i++;
if(i == lenBuf) goto fail; /* nothing left to process */
CHKR(getFieldName(ctx, buf, lenBuf, &i, &fieldName));
if(i == lenBuf) goto fail; /* nothing left to process */
if(buf[i] != '=') goto fail; /* format error */
i++;
skipWhitespace(ctx, buf, lenBuf, &i);
if(buf[i] != '"') goto fail; /* format error */
++i;
while(i < lenBuf && buf[i] != '"') {
if(fieldVal == NULL) {
CHKN(fieldVal = es_newStr(32));
}
CHKR(es_addChar(&fieldVal, buf[i]));
++i;
}
*offs = (i == lenBuf) ? i : i+1;
CHKR(ln_addAnnotOp(annot, opc, fieldName, fieldVal));
r = 0;
done: return r;
fail: return -1;
}
/**
* Process a new annotation and add it to the annotation set.
*
* @param[in] ctx current context
* @param[in] buf line buffer
* @param[in] len length of buffer
* @param[in] offs offset where annotation starts
* @returns 0 on success, something else otherwise
*/
static int
processAnnotate(ln_ctx ctx, const char *buf, es_size_t lenBuf, es_size_t offs)
{
int r;
es_str_t *tag = NULL;
ln_annot *annot;
ln_dbgprintf(ctx, "sample annotation to add: '%s'", buf+offs);
CHKR(getFieldName(ctx, buf, lenBuf, &offs, &tag));
skipWhitespace(ctx, buf, lenBuf, &offs);
if(buf[offs] != ':' || tag == NULL) {
ln_dbgprintf(ctx, "invalid tag field in annotation, line is '%s'", buf);
r=-1;
goto done;
}
++offs;
/* we got an annotation! */
CHKN(annot = ln_newAnnot(tag));
while(offs < lenBuf) {
CHKR(getAnnotationOp(ctx, annot, buf, lenBuf, &offs));
}
r = ln_addAnnotToSet(ctx->pas, annot);
done: return r;
}
struct ln_v1_samp *
ln_v1_processSamp(ln_ctx ctx, const char *buf, es_size_t lenBuf)
{
struct ln_v1_samp *samp = NULL;
es_str_t *typeStr = NULL;
es_size_t offs;
if(getLineType(buf, lenBuf, &offs, &typeStr) != 0)
goto done;
if(!es_strconstcmp(typeStr, "prefix")) {
if(getPrefix(buf, lenBuf, offs, &ctx->rulePrefix) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "extendprefix")) {
if(extendPrefix(ctx, buf, lenBuf, offs) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "rule")) {
if(processRule(ctx, buf, lenBuf, offs) != 0) goto done;
} else if(!es_strconstcmp(typeStr, "annotate")) {
if(processAnnotate(ctx, buf, lenBuf, offs) != 0) goto done;
} else {
/* TODO error reporting */
char *str;
str = es_str2cstr(typeStr, NULL);
ln_dbgprintf(ctx, "invalid record type detected: '%s'", str);
free(str);
goto done;
}
done:
if(typeStr != NULL)
es_deleteStr(typeStr);
return samp;
}
struct ln_v1_samp *
ln_v1_sampRead(ln_ctx ctx, FILE *const __restrict__ repo, int *const __restrict__ isEof)
{
struct ln_v1_samp *samp = NULL;
char buf[10*1024]; /**< max size of rule - TODO: make configurable */
size_t i = 0;
int inParser = 0;
int done = 0;
while(!done) {
int c = fgetc(repo);
if(c == EOF) {
*isEof = 1;
if(i == 0)
goto done;
else
done = 1; /* last line missing LF, still process it! */
} else if(c == '\n') {
++ctx->conf_ln_nbr;
if(inParser) {
if(ln_sampChkRunawayRule(ctx, repo, NULL)) {
/* ignore previous rule */
inParser = 0;
i = 0;
}
}
if(!inParser && i != 0)
done = 1;
} else if(c == '#' && i == 0) {
ln_sampSkipCommentLine(ctx, repo, NULL);
i = 0; /* back to beginning */
} else {
if(c == '%')
inParser = (inParser) ? 0 : 1;
buf[i++] = c;
if(i >= sizeof(buf)) {
ln_errprintf(ctx, 0, "line is too long");
goto done;
}
}
}
buf[i] = '\0';
ln_dbgprintf(ctx, "read rulebase line[~%d]: '%s'", ctx->conf_ln_nbr, buf);
ln_v1_processSamp(ctx, buf, i);
ln_dbgprintf(ctx, "---------------------------------------");
ln_displayPTree(ctx->ptree, 0);
ln_dbgprintf(ctx, "=======================================");
done:
return samp;
}
liblognorm-2.0.6/src/parser.c 0000644 0001750 0001750 00000262524 13370250152 013044 0000000 0000000 /*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2018 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include
#include "liblognorm.h"
#include "lognorm.h"
#include "internal.h"
#include "parser.h"
#include "samp.h"
#include "helpers.h"
#ifdef FEATURE_REGEXP
#include
#include
#endif
/* how should output values be formatted? */
enum FMT_MODE {
FMT_AS_STRING = 0,
FMT_AS_NUMBER = 1,
FMT_AS_TIMESTAMP_UX = 2,
FMT_AS_TIMESTAMP_UX_MS = 3
};
/* some helpers */
static inline int
hParseInt(const unsigned char **buf, size_t *lenBuf)
{
const unsigned char *p = *buf;
size_t len = *lenBuf;
int i = 0;
while(len > 0 && myisdigit(*p)) {
i = i * 10 + *p - '0';
++p;
--len;
}
*buf = p;
*lenBuf = len;
return i;
}
/* parser _parse interface
*
* All parsers receive
*
* @param[in] npb->str the to-be-parsed string
* @param[in] npb->strLen length of the to-be-parsed string
* @param[in] offs an offset into the string
* @param[in] pointer to parser data block
* @param[out] parsed bytes
* @param[out] value ptr to json object containing parsed data
* (can be unused, but if used *value MUST be NULL on entry)
*
* They will try to parse out "their" object from the string. If they
* succeed, they:
*
* return 0 on success and LN_WRONGPARSER if this parser could
* not successfully parse (but all went well otherwise) and something
* else in case of an error.
*/
#define PARSER_Parse(ParserName) \
int ln_v2_parse##ParserName( \
npb_t *const npb, \
size_t *const offs, \
__attribute__((unused)) void *const pdata, \
size_t *parsed, \
struct json_object **value) \
{ \
int r = LN_WRONGPARSER; \
*parsed = 0;
#define FAILParser \
goto parserdone; /* suppress warnings */ \
parserdone: \
r = 0; \
goto done; /* suppress warnings */ \
done:
#define ENDFailParser \
return r; \
}
/* Return printable representation of parser content for
* display purposes. This must not be 100% exact, but provide
* a good indication of what it contains for a human.
* @param[data] data parser data block
* @return pointer to c string, NOT to be freed
*/
#define PARSER_DataForDisplay(ParserName) \
const char * ln_DataForDisplay##ParserName(__attribute__((unused)) ln_ctx ctx, void *const pdata)
/* Return JSON parser config. This is primarily for comparison
* of parser equalness.
* @param[data] data parser data block
* @return pointer to c string, NOT to be freed
*/
#define PARSER_JsonConf(ParserName) \
const char * ln_JsonConf##ParserName(__attribute__((unused)) ln_ctx ctx, void *const pdata)
/* parser constructor
* @param[in] json config json items
* @param[out] data parser data block (to be allocated)
* At minimum, *data must be set to NULL
* @return error status (0 == OK)
*/
#define PARSER_Construct(ParserName) \
int ln_construct##ParserName( \
__attribute__((unused)) ln_ctx ctx, \
__attribute__((unused)) json_object *const json, \
void **pdata)
/* parser destructor
* @param[data] data parser data block (to be de-allocated)
*/
#define PARSER_Destruct(ParserName) \
void ln_destruct##ParserName(__attribute__((unused)) ln_ctx ctx, void *const pdata)
/* the following table saves us from computing an additional date to get
* the ordinal day of the year - at least from 1967-2099
* Note: non-2038+ compliant systems (Solaris) will generate compiler
* warnings on the post 2038-rollover years.
*/
static const int yearInSec_startYear = 1967;
/* for x in $(seq 1967 2099) ; do
* printf %s', ' $(date --date="Dec 31 ${x} UTC 23:59:59" +%s)
* done |fold -w 70 -s */
static const time_t yearInSecs[] = {
-63158401, -31536001, -1, 31535999, 63071999, 94694399, 126230399,
157766399, 189302399, 220924799, 252460799, 283996799, 315532799,
347155199, 378691199, 410227199, 441763199, 473385599, 504921599,
536457599, 567993599, 599615999, 631151999, 662687999, 694223999,
725846399, 757382399, 788918399, 820454399, 852076799, 883612799,
915148799, 946684799, 978307199, 1009843199, 1041379199, 1072915199,
1104537599, 1136073599, 1167609599, 1199145599, 1230767999,
1262303999, 1293839999, 1325375999, 1356998399, 1388534399,
1420070399, 1451606399, 1483228799, 1514764799, 1546300799,
1577836799, 1609459199, 1640995199, 1672531199, 1704067199,
1735689599, 1767225599, 1798761599, 1830297599, 1861919999,
1893455999, 1924991999, 1956527999, 1988150399, 2019686399,
2051222399, 2082758399, 2114380799, 2145916799, 2177452799,
2208988799, 2240611199, 2272147199, 2303683199, 2335219199,
2366841599, 2398377599, 2429913599, 2461449599, 2493071999,
2524607999, 2556143999, 2587679999, 2619302399, 2650838399,
2682374399, 2713910399, 2745532799, 2777068799, 2808604799,
2840140799, 2871763199, 2903299199, 2934835199, 2966371199,
2997993599, 3029529599, 3061065599, 3092601599, 3124223999,
3155759999, 3187295999, 3218831999, 3250454399, 3281990399,
3313526399, 3345062399, 3376684799, 3408220799, 3439756799,
3471292799, 3502915199, 3534451199, 3565987199, 3597523199,
3629145599, 3660681599, 3692217599, 3723753599, 3755375999,
3786911999, 3818447999, 3849983999, 3881606399, 3913142399,
3944678399, 3976214399, 4007836799, 4039372799, 4070908799,
4102444799};
/**
* convert syslog timestamp to time_t
* Note: it would be better to use something similar to mktime() here.
* Unfortunately, mktime() semantics are problematic: first of all, it
* works on local time, on the machine's time zone. In syslog, we have
* to deal with multiple time zones at once, so we cannot plainly rely
* on the local zone, and so we cannot rely on mktime(). One solution would
* be to refactor all time-related functions so that they are all guarded
* by a mutex to ensure TZ consistency (which would also enable us to
* change the TZ at will for specific function calls). But that would
* potentially mean a lot of overhead.
* Also, mktime() has some side effects, at least setting of tzname. With
* a refactoring as described above that should probably not be a problem,
* but would also need more work. For some more thoughts on this topic,
* have a look here:
* http://stackoverflow.com/questions/18355101/is-standard-c-mktime-thread-safe-on-linux
* In conclusion, we keep our own code for generating the unix timestamp.
* rgerhards, 2016-03-02 (taken from rsyslog sources)
*/
static time_t
syslogTime2time_t(const int year, const int month, const int day,
const int hour, const int minute, const int second,
const int OffsetHour, const int OffsetMinute, const char OffsetMode)
{
long MonthInDays, NumberOfYears, NumberOfDays;
int utcOffset;
time_t TimeInUnixFormat;
if(year < 1970 || year > 2100) {
TimeInUnixFormat = 0;
goto done;
}
/* Counting how many Days have passed since the 01.01 of the
* selected Year (Month level), according to the selected Month*/
switch(month)
{
case 1:
MonthInDays = 0; //until 01 of January
break;
case 2:
MonthInDays = 31; //until 01 of February - leap year handling down below!
break;
case 3:
MonthInDays = 59; //until 01 of March
break;
case 4:
MonthInDays = 90; //until 01 of April
break;
case 5:
MonthInDays = 120; //until 01 of May
break;
case 6:
MonthInDays = 151; //until 01 of June
break;
case 7:
MonthInDays = 181; //until 01 of July
break;
case 8:
MonthInDays = 212; //until 01 of August
break;
case 9:
MonthInDays = 243; //until 01 of September
break;
case 10:
MonthInDays = 273; //until 01 of October
break;
case 11:
MonthInDays = 304; //until 01 of November
break;
case 12:
MonthInDays = 334; //until 01 of December
break;
default: /* this cannot happen (and would be a program error)
* but we need the code to keep the compiler silent.
*/
MonthInDays = 0; /* any value fits ;) */
break;
}
/* adjust for leap years */
if((year % 100 != 0 && year % 4 == 0) || (year == 2000)) {
if(month > 2)
MonthInDays++;
}
/* 1) Counting how many Years have passed since 1970
2) Counting how many Days have passed since the 01.01 of the selected Year
(Day level) according to the selected Month and Day. The current day itself
is not counted, as only fully elapsed days are added
3) Calculating this period (NumberOfDays) in seconds*/
NumberOfYears = year - yearInSec_startYear - 1;
NumberOfDays = MonthInDays + day - 1;
TimeInUnixFormat = (yearInSecs[NumberOfYears] + 1) + NumberOfDays * 86400;
/*Add Hours, minutes and seconds */
TimeInUnixFormat += hour*60*60;
TimeInUnixFormat += minute*60;
TimeInUnixFormat += second;
/* do UTC offset */
utcOffset = OffsetHour*3600 + OffsetMinute*60;
if(OffsetMode == '+')
utcOffset *= -1; /* if timestamp is ahead, we need to "go back" to UTC */
TimeInUnixFormat += utcOffset;
done:
return TimeInUnixFormat;
}
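/* Worked example for the computation above: for 2016-03-02T10:00:00Z we use
 * yearInSecs[2016 - 1967 - 1] = yearInSecs[48] = 1451606399 (the last second
 * of 2015), add 1 to get the start of 2016, add (59 + 1 leap day + 2 - 1) = 61
 * days * 86400 = 5270400 seconds and finally 10*3600 seconds, which yields
 * 1456912800 -- the UNIX timestamp of that instant.
 */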
struct data_RFC5424Date {
enum FMT_MODE fmt_mode;
};
/**
* Parse a TIMESTAMP as specified in RFC5424 (subset of RFC3339).
*/
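/* Examples of accepted input (illustrative): "2018-03-01T14:00:00Z" and
 * "2018-03-01T14:00:00.123+02:00". Single-digit date/time fields are
 * tolerated (e.g. "2003-9-1T1:0:0Z"), but the timezone designator is
 * always required (see the explicit check below).
 */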
PARSER_Parse(RFC5424Date)
const unsigned char *pszTS;
struct data_RFC5424Date *const data = (struct data_RFC5424Date*) pdata;
/* variables to temporarily hold time information while we parse */
int year;
int month;
int day;
int hour; /* 24 hour clock */
int minute;
int second;
int secfrac; /* fractional seconds (must be 32 bit!) */
int secfracPrecision;
int OffsetHour; /* UTC offset in hours */
int OffsetMinute; /* UTC offset in minutes */
char OffsetMode;
size_t len;
size_t orglen;
/* end variables to temporarily hold time information while we parse */
pszTS = (unsigned char*) npb->str + *offs;
len = orglen = npb->strLen - *offs;
year = hParseInt(&pszTS, &len);
/* We take the liberty of accepting slightly malformed timestamps, e.g. in
* the format 2003-9-1T1:0:0 (single-digit fields). */
if(len == 0 || *pszTS++ != '-') goto done;
--len;
month = hParseInt(&pszTS, &len);
if(month < 1 || month > 12) goto done;
if(len == 0 || *pszTS++ != '-')
goto done;
--len;
day = hParseInt(&pszTS, &len);
if(day < 1 || day > 31) goto done;
if(len == 0 || *pszTS++ != 'T') goto done;
--len;
hour = hParseInt(&pszTS, &len);
if(hour < 0 || hour > 23) goto done;
if(len == 0 || *pszTS++ != ':')
goto done;
--len;
minute = hParseInt(&pszTS, &len);
if(minute < 0 || minute > 59) goto done;
if(len == 0 || *pszTS++ != ':') goto done;
--len;
second = hParseInt(&pszTS, &len);
if(second < 0 || second > 60) goto done;
/* Now let's see if we have secfrac */
if(len > 0 && *pszTS == '.') {
--len;
const unsigned char *pszStart = ++pszTS;
secfrac = hParseInt(&pszTS, &len);
secfracPrecision = (int) (pszTS - pszStart);
} else {
secfracPrecision = 0;
secfrac = 0;
}
/* check the timezone */
if(len == 0) goto done;
if(*pszTS == 'Z') {
OffsetHour = 0;
OffsetMinute = 0;
OffsetMode = '+';
--len;
pszTS++; /* eat Z */
} else if((*pszTS == '+') || (*pszTS == '-')) {
OffsetMode = *pszTS;
--len;
pszTS++;
OffsetHour = hParseInt(&pszTS, &len);
if(OffsetHour < 0 || OffsetHour > 23)
goto done;
if(len == 0 || *pszTS++ != ':')
goto done;
--len;
OffsetMinute = hParseInt(&pszTS, &len);
if(OffsetMinute < 0 || OffsetMinute > 59)
goto done;
} else {
/* there MUST be TZ information */
goto done;
}
if(len > 0) {
if(*pszTS != ' ') /* if it is not a space, it can not be a "good" time */
goto done;
}
/* we had success, so update parse pointer */
*parsed = orglen - len;
if(value != NULL) {
if(data->fmt_mode == FMT_AS_STRING) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
} else {
int64_t timestamp = syslogTime2time_t(year, month, day,
hour, minute, second, OffsetHour, OffsetMinute, OffsetMode);
if(data->fmt_mode == FMT_AS_TIMESTAMP_UX_MS) {
timestamp *= 1000;
/* simulate pow(), do not use math lib! */
int div = 1;
if(secfracPrecision == 1) {
secfrac *= 100;
} else if(secfracPrecision == 2) {
secfrac *= 10;
} else if(secfracPrecision > 3) {
for(int i = 0 ; i < (secfracPrecision - 3) ; ++i)
div *= 10;
}
timestamp += secfrac / div;
}
*value = json_object_new_int64(timestamp);
}
}
r = 0; /* success */
done:
return r;
}
PARSER_Construct(RFC5424Date)
{
int r = 0;
struct data_RFC5424Date *data =
(struct data_RFC5424Date*) calloc(1, sizeof(struct data_RFC5424Date));
data->fmt_mode = FMT_AS_STRING;
if(json == NULL)
goto done;
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcmp(key, "format")) {
const char *fmtmode = json_object_get_string(val);
if(!strcmp(fmtmode, "timestamp-unix")) {
data->fmt_mode = FMT_AS_TIMESTAMP_UX;
} else if(!strcmp(fmtmode, "timestamp-unix-ms")) {
data->fmt_mode = FMT_AS_TIMESTAMP_UX_MS;
} else if(!strcmp(fmtmode, "string")) {
data->fmt_mode = FMT_AS_STRING;
} else {
ln_errprintf(ctx, 0, "invalid value for date-rfc5424:format %s",
fmtmode);
}
} else {
if(!(strcmp(key, "name") == 0 && strcmp(json_object_get_string(val), "-") == 0)) {
ln_errprintf(ctx, 0, "invalid param for date-rfc5424 %s", key);
}
}
json_object_iter_next(&it);
}
done:
*pdata = data;
return r;
}
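/* Illustrative configuration: passing the parameter object
 *   {"format": "timestamp-unix-ms"}
 * to this constructor selects FMT_AS_TIMESTAMP_UX_MS; without a "format"
 * parameter the default FMT_AS_STRING is used.
 */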
PARSER_Destruct(RFC5424Date)
{
free(pdata);
}
struct data_RFC3164Date {
enum FMT_MODE fmt_mode;
};
/**
* Parse a RFC3164 Date.
*/
PARSER_Parse(RFC3164Date)
const unsigned char *p;
size_t len, orglen;
struct data_RFC3164Date *const data = (struct data_RFC3164Date*) pdata;
/* variables to temporarily hold time information while we parse */
int year;
int month;
int day;
int hour; /* 24 hour clock */
int minute;
int second;
p = (unsigned char*) npb->str + *offs;
orglen = len = npb->strLen - *offs;
/* If we look at the month (Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov, Dec),
* we may see the following character sequences occur:
*
* J(an/u(n/l)), Feb, Ma(r/y), A(pr/ug), Sep, Oct, Nov, Dec
*
* We will use this for parsing, as it probably is the
* fastest way to parse it.
*/
if(len < 3)
goto done;
switch(*p++)
{
case 'j':
case 'J':
if(*p == 'a' || *p == 'A') {
++p;
if(*p == 'n' || *p == 'N') {
++p;
month = 1;
} else
goto done;
} else if(*p == 'u' || *p == 'U') {
++p;
if(*p == 'n' || *p == 'N') {
++p;
month = 6;
} else if(*p == 'l' || *p == 'L') {
++p;
month = 7;
} else
goto done;
} else
goto done;
break;
case 'f':
case 'F':
if(*p == 'e' || *p == 'E') {
++p;
if(*p == 'b' || *p == 'B') {
++p;
month = 2;
} else
goto done;
} else
goto done;
break;
case 'm':
case 'M':
if(*p == 'a' || *p == 'A') {
++p;
if(*p == 'r' || *p == 'R') {
++p;
month = 3;
} else if(*p == 'y' || *p == 'Y') {
++p;
month = 5;
} else
goto done;
} else
goto done;
break;
case 'a':
case 'A':
if(*p == 'p' || *p == 'P') {
++p;
if(*p == 'r' || *p == 'R') {
++p;
month = 4;
} else
goto done;
} else if(*p == 'u' || *p == 'U') {
++p;
if(*p == 'g' || *p == 'G') {
++p;
month = 8;
} else
goto done;
} else
goto done;
break;
case 's':
case 'S':
if(*p == 'e' || *p == 'E') {
++p;
if(*p == 'p' || *p == 'P') {
++p;
month = 9;
} else
goto done;
} else
goto done;
break;
case 'o':
case 'O':
if(*p == 'c' || *p == 'C') {
++p;
if(*p == 't' || *p == 'T') {
++p;
month = 10;
} else
goto done;
} else
goto done;
break;
case 'n':
case 'N':
if(*p == 'o' || *p == 'O') {
++p;
if(*p == 'v' || *p == 'V') {
++p;
month = 11;
} else
goto done;
} else
goto done;
break;
case 'd':
case 'D':
if(*p == 'e' || *p == 'E') {
++p;
if(*p == 'c' || *p == 'C') {
++p;
month = 12;
} else
goto done;
} else
goto done;
break;
default:
goto done;
}
len -= 3;
/* done month */
if(len == 0 || *p++ != ' ')
goto done;
--len;
/* we accept a slightly malformed timestamp with one-digit days. */
if(*p == ' ') {
--len;
++p;
}
day = hParseInt(&p, &len);
if(day < 1 || day > 31)
goto done;
if(len == 0 || *p++ != ' ')
goto done;
--len;
/* time part */
hour = hParseInt(&p, &len);
if(hour > 1970 && hour < 2100) {
/* if so, we assume this actually is a year. This is a format found
* e.g. in Cisco devices.
*
year = hour;
*/
/* re-query the hour, this time it must be valid */
if(len == 0 || *p++ != ' ')
goto done;
--len;
hour = hParseInt(&p, &len);
}
if(hour < 0 || hour > 23)
goto done;
if(len == 0 || *p++ != ':')
goto done;
--len;
minute = hParseInt(&p, &len);
if(minute < 0 || minute > 59)
goto done;
if(len == 0 || *p++ != ':')
goto done;
--len;
second = hParseInt(&p, &len);
if(second < 0 || second > 60)
goto done;
/* we provide support for an extra ":" after the date. While this is an
* invalid format, it occurs frequently enough (e.g. with Cisco devices)
* to permit it as a valid case. -- rgerhards, 2008-09-12
*/
if(len > 0 && *p == ':') {
++p; /* just skip past it */
--len;
}
/* we had success, so update parse pointer */
*parsed = orglen - len;
if(value != NULL) {
if(data->fmt_mode == FMT_AS_STRING) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
} else {
/* we assume year == current year, so let's obtain current year */
struct tm tm;
const time_t curr = time(NULL);
gmtime_r(&curr, &tm);
year = tm.tm_year + 1900;
int64_t timestamp = syslogTime2time_t(year, month, day,
hour, minute, second, 0, 0, '+');
if(data->fmt_mode == FMT_AS_TIMESTAMP_UX_MS) {
/* we do not have more precise info, just bring
* into common format!
*/
timestamp *= 1000;
}
*value = json_object_new_int64(timestamp);
}
}
r = 0; /* success */
done:
return r;
}
PARSER_Construct(RFC3164Date)
{
int r = 0;
struct data_RFC3164Date *data = (struct data_RFC3164Date*) calloc(1, sizeof(struct data_RFC3164Date));
data->fmt_mode = FMT_AS_STRING;
if(json == NULL)
goto done;
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcmp(key, "format")) {
const char *fmtmode = json_object_get_string(val);
if(!strcmp(fmtmode, "timestamp-unix")) {
data->fmt_mode = FMT_AS_TIMESTAMP_UX;
} else if(!strcmp(fmtmode, "timestamp-unix-ms")) {
data->fmt_mode = FMT_AS_TIMESTAMP_UX_MS;
} else if(!strcmp(fmtmode, "string")) {
data->fmt_mode = FMT_AS_STRING;
} else {
ln_errprintf(ctx, 0, "invalid value for date-rfc3164:format %s",
fmtmode);
}
} else {
if(!(strcmp(key, "name") == 0 && strcmp(json_object_get_string(val), "-") == 0)) {
ln_errprintf(ctx, 0, "invalid param for date-rfc3164 %s", key);
}
}
json_object_iter_next(&it);
}
done:
*pdata = data;
return r;
}
PARSER_Destruct(RFC3164Date)
{
free(pdata);
}
struct data_Number {
int64_t maxval;
enum FMT_MODE fmt_mode;
};
/**
* Parse a Number.
* Note that a number is an abstracted concept. We always represent it
* as 64 bits (but may later change our mind if performance dictates so).
*/
PARSER_Parse(Number)
const char *c;
size_t i;
int64_t val = 0;
struct data_Number *const data = (struct data_Number*) pdata;
enum FMT_MODE fmt_mode = FMT_AS_STRING;
int64_t maxval = 0;
if(data != NULL) {
fmt_mode = data->fmt_mode;
maxval = data->maxval;
}
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
for (i = *offs; i < npb->strLen && myisdigit(c[i]); i++)
val = val * 10 + c[i] - '0';
if(maxval > 0 && val > maxval) {
LN_DBGPRINTF(npb->ctx, "number parser: val too large (max %" PRIu64
", actual %" PRIu64 ")",
maxval, val);
goto done;
}
if (i == *offs)
goto done;
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
if(fmt_mode == FMT_AS_STRING) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
} else {
*value = json_object_new_int64(val);
}
}
r = 0; /* success */
done:
return r;
}
PARSER_Construct(Number)
{
int r = 0;
struct data_Number *data = (struct data_Number*) calloc(1, sizeof(struct data_Number));
data->fmt_mode = FMT_AS_STRING;
if(json == NULL)
goto done;
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcmp(key, "maxval")) {
errno = 0;
data->maxval = json_object_get_int64(val);
if(errno != 0) {
ln_errprintf(ctx, errno, "param 'maxval' must be integer but is: %s",
json_object_to_json_string(val));
}
} else if(!strcmp(key, "format")) {
const char *fmtmode = json_object_get_string(val);
if(!strcmp(fmtmode, "number")) {
data->fmt_mode = FMT_AS_NUMBER;
} else if(!strcmp(fmtmode, "string")) {
data->fmt_mode = FMT_AS_STRING;
} else {
ln_errprintf(ctx, 0, "invalid value for number:format %s",
fmtmode);
}
} else {
if(!(strcmp(key, "name") == 0 && strcmp(json_object_get_string(val), "-") == 0)) {
ln_errprintf(ctx, 0, "invalid param for number: %s", key);
}
}
json_object_iter_next(&it);
}
done:
*pdata = data;
return r;
}
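/* Illustrative configuration: the parameter object
 *   {"maxval": 65535, "format": "number"}
 * makes the parser reject values larger than 65535 and emit the result as
 * a JSON number instead of a string.
 */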
PARSER_Destruct(Number)
{
free(pdata);
}
struct data_Float {
enum FMT_MODE fmt_mode;
};
/**
* Parse a Real-number in floating-pt form.
*/
PARSER_Parse(Float)
const char *c;
size_t i;
const struct data_Float *const data = (struct data_Float*) pdata;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
int isNeg = 0;
double val = 0;
int seen_point = 0;
double frac = 10;
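/* frac is the divisor for the next fractional digit: 10 for the first
 * digit after the decimal point, then 100, 1000, ... */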
i = *offs;
if (c[i] == '-') {
isNeg = 1;
i++;
}
for (; i < npb->strLen; i++) {
if (c[i] == '.') {
if (seen_point != 0)
break;
seen_point = 1;
} else if (myisdigit(c[i])) {
if(seen_point) {
val += (c[i] - '0') / frac;
frac *= 10;
} else {
val = val * 10 + c[i] - '0';
}
} else {
break;
}
}
if (i == *offs)
goto done;
if(isNeg)
val *= -1;
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
if(data->fmt_mode == FMT_AS_STRING) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
} else {
char *serialized = strndup(npb->str+(*offs), *parsed);
*value = json_object_new_double_s(val, serialized);
free(serialized);
}
}
r = 0; /* success */
done:
return r;
}
PARSER_Construct(Float)
{
int r = 0;
struct data_Float *data = (struct data_Float*) calloc(1, sizeof(struct data_Float));
data->fmt_mode = FMT_AS_STRING;
if(json == NULL)
goto done;
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcmp(key, "format")) {
const char *fmtmode = json_object_get_string(val);
if(!strcmp(fmtmode, "number")) {
data->fmt_mode = FMT_AS_NUMBER;
} else if(!strcmp(fmtmode, "string")) {
data->fmt_mode = FMT_AS_STRING;
} else {
ln_errprintf(ctx, 0, "invalid value for float:format %s",
fmtmode);
}
} else {
if(!(strcmp(key, "name") == 0 && strcmp(json_object_get_string(val), "-") == 0)) {
ln_errprintf(ctx, 0, "invalid param for float: %s", key);
}
}
json_object_iter_next(&it);
}
done:
*pdata = data;
return r;
}
PARSER_Destruct(Float)
{
free(pdata);
}
struct data_HexNumber {
uint64_t maxval;
enum FMT_MODE fmt_mode;
};
/**
* Parse a hex Number.
* A hex number begins with 0x and contains only hex digits until the terminating
* whitespace. Note that if a non-hex character is detected inside the number string,
* this is NOT considered to be a number.
*/
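/* Examples (illustrative): "0x1f3c " matches (value 7996 in "number" format),
 * while "0x1g3c" does not, because 'g' is not a hex digit and the digit
 * sequence is not terminated by whitespace.
 */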
PARSER_Parse(HexNumber)
const char *c;
size_t i = *offs;
struct data_HexNumber *const data = (struct data_HexNumber*) pdata;
uint64_t maxval = data->maxval;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
if(c[i] != '0' || c[i+1] != 'x')
goto done;
uint64_t val = 0;
for (i += 2 ; i < npb->strLen && isxdigit(c[i]); i++) {
const char digit = tolower(c[i]);
val *= 16;
if(digit >= 'a' && digit <= 'f')
val += digit - 'a' + 10;
else
val += digit - '0';
}
if (i == *offs || !isspace(c[i]))
goto done;
if(maxval > 0 && val > maxval) {
LN_DBGPRINTF(npb->ctx, "hexnumber parser: val too large (max %" PRIu64
", actual %" PRIu64 ")",
maxval, val);
goto done;
}
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
if(data->fmt_mode == FMT_AS_STRING) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
} else {
*value = json_object_new_int64((int64_t) val);
}
}
r = 0; /* success */
done:
return r;
}
PARSER_Construct(HexNumber)
{
int r = 0;
struct data_HexNumber *data = (struct data_HexNumber*) calloc(1, sizeof(struct data_HexNumber));
data->fmt_mode = FMT_AS_STRING;
if(json == NULL)
goto done;
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcmp(key, "maxval")) {
errno = 0;
data->maxval = json_object_get_int64(val);
if(errno != 0) {
ln_errprintf(ctx, errno, "param 'maxval' must be integer but is: %s",
json_object_to_json_string(val));
}
} else if(!strcmp(key, "format")) {
const char *fmtmode = json_object_get_string(val);
if(!strcmp(fmtmode, "number")) {
data->fmt_mode = FMT_AS_NUMBER;
} else if(!strcmp(fmtmode, "string")) {
data->fmt_mode = FMT_AS_STRING;
} else {
ln_errprintf(ctx, 0, "invalid value for hexnumber:format %s",
fmtmode);
}
} else {
if(!(strcmp(key, "name") == 0 && strcmp(json_object_get_string(val), "-") == 0)) {
ln_errprintf(ctx, 0, "invalid param for hexnumber: %s", key);
}
}
json_object_iter_next(&it);
}
done:
*pdata = data;
return r;
}
PARSER_Destruct(HexNumber)
{
free(pdata);
}
/**
* Parse a kernel timestamp.
* This is a fixed format, see
* https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/kernel/printk/printk.c?id=refs/tags/v4.0#n1011
* This is the code that generates it:
* sprintf(buf, "[%5lu.%06lu] ", (unsigned long)ts, rem_nsec / 1000);
* We accept up to 12 digits for ts; anything longer than that is
* certainly not a timestamp.
*/
#define LEN_KERNEL_TIMESTAMP 14
PARSER_Parse(KernelTimestamp)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
if(c[i] != '[' || i+LEN_KERNEL_TIMESTAMP > npb->strLen
|| !myisdigit(c[i+1])
|| !myisdigit(c[i+2])
|| !myisdigit(c[i+3])
|| !myisdigit(c[i+4])
|| !myisdigit(c[i+5])
)
goto done;
i += 6;
for(int j = 0 ; j < 7 && i < npb->strLen && myisdigit(c[i]) ; )
++i, ++j; /* just scan */
if(i >= npb->strLen || c[i] != '.')
goto done;
++i; /* skip over '.' */
if( i+7 > npb->strLen
|| !myisdigit(c[i+0])
|| !myisdigit(c[i+1])
|| !myisdigit(c[i+2])
|| !myisdigit(c[i+3])
|| !myisdigit(c[i+4])
|| !myisdigit(c[i+5])
|| c[i+6] != ']'
)
goto done;
i += 7;
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/**
* Parse whitespace.
* This parses all whitespace until the first non-whitespace character
* is found. This is primarily a tool to skip to the next "word" if
* the exact number of whitespace characters (and type of whitespace)
* is not known. The current parsing position MUST be on a whitespace,
* else the parser does not match.
* This parser is also a forward-compatibility tool for the upcoming
* slsa (simple log structure analyser) tool.
*/
PARSER_Parse(Whitespace)
const char *c;
size_t i = *offs;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
if(!isspace(c[i]))
goto done;
for (i++ ; i < npb->strLen && isspace(c[i]); i++);
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/**
* Parse a word.
* A word is a SP-delimited entity. The parser always works, except if
* the offset is positioned on a space upon entry.
*/
PARSER_Parse(Word)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
/* search end of word */
while(i < npb->strLen && c[i] != ' ')
i++;
if(i == *offs)
goto done;
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
struct data_StringTo {
const char *toFind;
size_t len;
};
/**
* Parse everything up to a specific string.
* swisskid, 2015-01-21
*/
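/* Example (illustrative): with extradata "--", the input "abc--def" yields
 * the value "abc". If the terminator string never appears, the parser does
 * not match.
 */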
PARSER_Parse(StringTo)
const char *c;
size_t i, j, m;
int chkstr;
struct data_StringTo *const data = (struct data_StringTo*) pdata;
const char *const toFind = data->toFind;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
chkstr = 0;
/* Total hunt for letter */
while(chkstr == 0 && i < npb->strLen ) {
i++;
if(c[i] == toFind[0]) {
/* Found the first letter, now find the rest of the string */
j = 1;
m = i+1;
while(m < npb->strLen && j < data->len ) {
if(c[m] != toFind[j])
break;
if(j == data->len - 1) { /* full match? */
chkstr = 1;
break;
}
j++;
m++;
}
}
}
if(i == *offs || i == npb->strLen || chkstr != 1)
goto done;
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
PARSER_Construct(StringTo)
{
int r = 0;
struct data_StringTo *data = (struct data_StringTo*) calloc(1, sizeof(struct data_StringTo));
struct json_object *ed;
if(json_object_object_get_ex(json, "extradata", &ed) == 0) {
ln_errprintf(ctx, 0, "string-to type needs 'extradata' parameter");
r = LN_BADCONFIG ;
goto done;
}
data->toFind = strdup(json_object_get_string(ed));
data->len = strlen(data->toFind);
*pdata = data;
done:
if(r != 0)
free(data);
return r;
}
PARSER_Destruct(StringTo)
{
struct data_StringTo *data = (struct data_StringTo*) pdata;
free((void*)data->toFind);
free(pdata);
}
/**
* Parse an alphabetic word.
* An alpha word is composed of characters for which isalpha() returns true.
* The parser fails if there is no alpha character at all.
*/
PARSER_Parse(Alpha)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
/* search end of word */
while(i < npb->strLen && isalpha(c[i]))
i++;
if(i == *offs) {
goto done;
}
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
struct data_CharTo {
char *term_chars;
size_t n_term_chars;
char *data_for_display;
};
/**
* Parse everything up to a specific character.
* The terminator character(s) are given via the extra data passed to the parser.
* It is considered a format error if
* a) the to-be-parsed buffer is already positioned on the terminator character
* b) there is no terminator until the end of the buffer
* In those cases, the parsers declares itself as not being successful, in all
* other cases a string is extracted.
*/
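/* Example (illustrative): with extradata ":", the input "user:root" yields
 * the value "user". If extradata contains several characters, e.g. ":;",
 * any one of them terminates the field.
 */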
PARSER_Parse(CharTo)
size_t i;
struct data_CharTo *const data = (struct data_CharTo*) pdata;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
i = *offs;
/* search end of word */
int found = 0;
while(i < npb->strLen && !found) {
for(size_t j = 0 ; j < data->n_term_chars ; ++j) {
if(npb->str[i] == data->term_chars[j]) {
found = 1;
break;
}
}
if(!found)
++i;
}
if(i == *offs || i == npb->strLen || !found)
goto done;
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0;
done:
return r;
}
PARSER_Construct(CharTo)
{
int r = 0;
LN_DBGPRINTF(ctx, "in parser_construct charTo");
struct data_CharTo *data = (struct data_CharTo*) calloc(1, sizeof(struct data_CharTo));
struct json_object *ed;
if(json_object_object_get_ex(json, "extradata", &ed) == 0) {
ln_errprintf(ctx, 0, "char-to type needs 'extradata' parameter");
r = LN_BADCONFIG ;
goto done;
}
data->term_chars = strdup(json_object_get_string(ed));
data->n_term_chars = strlen(data->term_chars);
*pdata = data;
done:
if(r != 0)
free(data);
return r;
}
PARSER_DataForDisplay(CharTo)
{
struct data_CharTo *data = (struct data_CharTo*) pdata;
if(data->data_for_display == NULL) {
data->data_for_display = malloc(8+data->n_term_chars+2);
if(data->data_for_display != NULL) {
memcpy(data->data_for_display, "char-to{", 8);
size_t i, j;
for(j = 0, i = 8 ; j < data->n_term_chars ; ++j, ++i) {
data->data_for_display[i] = data->term_chars[j];
}
data->data_for_display[i++] = '}';
data->data_for_display[i] = '\0';
}
}
return (data->data_for_display == NULL ) ? "malloc error" : data->data_for_display;
}
PARSER_Destruct(CharTo)
{
struct data_CharTo *const data = (struct data_CharTo*) pdata;
free(data->data_for_display);
free(data->term_chars);
free(pdata);
}
struct data_Literal {
const char *lit;
const char *json_conf;
};
/**
* Parse a specific literal.
*/
PARSER_Parse(Literal)
struct data_Literal *const data = (struct data_Literal*) pdata;
const char *const lit = data->lit;
size_t i = *offs;
size_t j;
for(j = 0 ; i < npb->strLen ; ++j) {
if(lit[j] != npb->str[i])
break;
++i;
}
*parsed = j; /* we must always return how far we parsed! */
if(lit[j] == '\0') {
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0;
}
return r;
}
PARSER_DataForDisplay(Literal)
{
struct data_Literal *data = (struct data_Literal*) pdata;
return data->lit;
}
PARSER_JsonConf(Literal)
{
struct data_Literal *data = (struct data_Literal*) pdata;
return data->json_conf;
}
PARSER_Construct(Literal)
{
int r = 0;
struct data_Literal *data = (struct data_Literal*) calloc(1, sizeof(struct data_Literal));
struct json_object *text;
if(json_object_object_get_ex(json, "text", &text) == 0) {
ln_errprintf(ctx, 0, "literal type needs 'text' parameter");
r = LN_BADCONFIG ;
goto done;
}
data->lit = strdup(json_object_get_string(text));
data->json_conf = strdup(json_object_to_json_string(json));
*pdata = data;
done:
if(r != 0)
free(data);
return r;
}
PARSER_Destruct(Literal)
{
struct data_Literal *data = (struct data_Literal*) pdata;
free((void*)data->lit);
free((void*)data->json_conf);
free(pdata);
}
/* for path compaction, we need a special handler to combine two
* literal data elements.
*/
int
ln_combineData_Literal(void *const porg, void *const padd)
{
struct data_Literal *const __restrict__ org = porg;
struct data_Literal *const __restrict__ add = padd;
int r = 0;
const size_t len = strlen(org->lit);
const size_t add_len = strlen(add->lit);
char *const newlit = (char*)realloc((void*)org->lit, len+add_len+1);
CHKN(newlit);
org->lit = newlit;
memcpy((char*)org->lit+len, add->lit, add_len+1);
done: return r;
}
struct data_CharSeparated {
char *term_chars;
size_t n_term_chars;
};
/**
* Parse everything up to a specific character, or up to the end of string.
* The terminator character(s) are given via the extra data passed to the parser.
* This parser always returns success.
* By nature of the parser, it is required that end of string or the separator
* follows this field in the rule.
*/
PARSER_Parse(CharSeparated)
struct data_CharSeparated *const data = (struct data_CharSeparated*) pdata;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
i = *offs;
/* search end of word */
int found = 0;
while(i < npb->strLen && !found) {
for(size_t j = 0 ; j < data->n_term_chars ; ++j) {
if(npb->str[i] == data->term_chars[j]) {
found = 1;
break;
}
}
if(!found)
++i;
}
/* success, persist */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
return r;
}
PARSER_Construct(CharSeparated)
{
int r = 0;
struct data_CharSeparated *data = (struct data_CharSeparated*) calloc(1, sizeof(struct data_CharSeparated));
struct json_object *ed;
if(json_object_object_get_ex(json, "extradata", &ed) == 0) {
ln_errprintf(ctx, 0, "char-separated type needs 'extradata' parameter");
r = LN_BADCONFIG ;
goto done;
}
data->term_chars = strdup(json_object_get_string(ed));
data->n_term_chars = strlen(data->term_chars);
*pdata = data;
done:
if(r != 0)
free(data);
return r;
}
PARSER_Destruct(CharSeparated)
{
struct data_CharSeparated *const data = (struct data_CharSeparated*) pdata;
free(data->term_chars);
free(pdata);
}
/**
* Just get everything till the end of string.
*/
PARSER_Parse(Rest)
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
/* silence the warning about unused variable */
(void)npb->str;
/* success, persist */
*parsed = npb->strLen - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0;
return r;
}
/**
* Parse a possibly quoted string. In this initial implementation, escaping of the quote
* char is not supported. A quoted string is one that starts with a double quote,
* has some text (not containing double quotes) and ends with the first double
* quote character seen. The extracted string does NOT include the quote characters.
* swisskid, 2015-01-21
*/
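/* Examples (illustrative): for the input
 *   "some text" trailing
 * the value some text is extracted (quotes stripped); for the unquoted input
 *   foo bar
 * only foo is extracted, i.e. everything up to the first space.
 */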
PARSER_Parse(OpQuotedString)
const char *c;
size_t i;
char *cstr = NULL;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
if(c[i] != '"') {
while(i < npb->strLen && c[i] != ' ')
i++;
if(i == *offs)
goto done;
/* success, persist */
*parsed = i - *offs;
/* create JSON value to save quoted string contents */
CHKN(cstr = strndup((char*)c + *offs, *parsed));
} else {
++i;
/* search end of string */
while(i < npb->strLen && c[i] != '"')
i++;
if(i == npb->strLen || c[i] != '"')
goto done;
/* success, persist */
*parsed = i + 1 - *offs; /* "eat" terminal double quote */
/* create JSON value to save quoted string contents */
CHKN(cstr = strndup((char*)c + *offs + 1, *parsed - 2));
}
CHKN(*value = json_object_new_string(cstr));
r = 0; /* success */
done:
free(cstr);
return r;
}
/**
* Parse a quoted string. In this initial implementation, escaping of the quote
* char is not supported. A quoted string is one that starts with a double quote,
* has some text (not containing double quotes) and ends with the first double
* quote character seen. The extracted string does NOT include the quote characters.
* rgerhards, 2011-01-14
*/
PARSER_Parse(QuotedString)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
if(i + 2 > npb->strLen)
goto done; /* needs at least 2 characters */
if(c[i] != '"')
goto done;
++i;
/* search end of string */
while(i < npb->strLen && c[i] != '"')
i++;
if(i == npb->strLen || c[i] != '"')
goto done;
/* success, persist */
*parsed = i + 1 - *offs; /* "eat" terminal double quote */
/* create JSON value to save quoted string contents */
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/**
* Parse an ISO date, that is YYYY-MM-DD (exactly this format).
* Note: we do manual loop unrolling -- this is fast AND efficient.
* rgerhards, 2011-01-14
*/
PARSER_Parse(ISODate)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
if(*offs+10 > npb->strLen)
goto done; /* if it is not 10 chars, it can't be an ISO date */
/* year */
if(!myisdigit(c[i])) goto done;
if(!myisdigit(c[i+1])) goto done;
if(!myisdigit(c[i+2])) goto done;
if(!myisdigit(c[i+3])) goto done;
if(c[i+4] != '-') goto done;
/* month */
if(c[i+5] == '0') {
if(c[i+6] < '1' || c[i+6] > '9') goto done;
} else if(c[i+5] == '1') {
if(c[i+6] < '0' || c[i+6] > '2') goto done;
} else {
goto done;
}
if(c[i+7] != '-') goto done;
/* day */
if(c[i+8] == '0') {
if(c[i+9] < '1' || c[i+9] > '9') goto done;
} else if(c[i+8] == '1' || c[i+8] == '2') {
if(!myisdigit(c[i+9])) goto done;
} else if(c[i+8] == '3') {
if(c[i+9] != '0' && c[i+9] != '1') goto done;
} else {
goto done;
}
/* success, persist */
*parsed = 10;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/**
* Parse a Cisco interface spec. Sample for such a spec are:
* outside:192.168.52.102/50349
* inside:192.168.1.15/56543 (192.168.1.112/54543)
* outside:192.168.1.13/50179 (192.168.1.13/50179)(LOCAL\some.user)
* outside:192.168.1.25/41850(LOCAL\RG-867G8-DEL88D879BBFFC8)
* inside:192.168.1.25/53 (192.168.1.25/53) (some.user)
* 192.168.1.15/0(LOCAL\RG-867G8-DEL88D879BBFFC8)
* From this, we conclude the format is:
* [interface:]ip/port [SP (ip2/port2)] [[SP](username)]
* In order to match, this syntax must start on a non-whitespace char
* other than colon.
*/
PARSER_Parse(CiscoInterfaceSpec)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
if(c[i] == ':' || isspace(c[i])) goto done;
/* first, check if we have an interface. We do this by trying
* to detect if we have an IP. If we have, obviously no interface
* is present. Otherwise, we check if we have a valid interface.
*/
int bHaveInterface = 0;
size_t idxInterface = 0;
size_t lenInterface = 0;
int bHaveIP = 0;
size_t lenIP;
size_t idxIP = i;
if(ln_v2_parseIPv4(npb, &i, NULL, &lenIP, NULL) == 0) {
bHaveIP = 1;
i += lenIP - 1; /* position on delimiter */
} else {
idxInterface = i;
while(i < npb->strLen) {
if(isspace(c[i])) goto done;
if(c[i] == ':')
break;
++i;
}
lenInterface = i - idxInterface;
bHaveInterface = 1;
}
if(i == npb->strLen) goto done;
++i; /* skip over colon */
/* we now utilize our other parser helpers */
if(!bHaveIP) {
idxIP = i;
if(ln_v2_parseIPv4(npb, &i, NULL, &lenIP, NULL) != 0) goto done;
i += lenIP;
}
if(i == npb->strLen || c[i] != '/') goto done;
++i; /* skip slash */
const size_t idxPort = i;
size_t lenPort;
if(ln_v2_parseNumber(npb, &i, NULL, &lenPort, NULL) != 0) goto done;
i += lenPort;
/* check if optional second ip/port is present
* We assume we must at least have 5 chars [" (::1)"]
*/
int bHaveIP2 = 0;
size_t idxIP2 = 0, lenIP2 = 0;
size_t idxPort2 = 0, lenPort2 = 0;
if(i+5 < npb->strLen && c[i] == ' ' && c[i+1] == '(') {
size_t iTmp = i+2; /* skip over " (" */
idxIP2 = iTmp;
if(ln_v2_parseIPv4(npb, &iTmp, NULL, &lenIP2, NULL) == 0) {
iTmp += lenIP2;
if(iTmp < npb->strLen && c[iTmp] == '/') {
++iTmp; /* skip slash */
idxPort2 = iTmp;
if(ln_v2_parseNumber(npb, &iTmp, NULL, &lenPort2, NULL) == 0) {
iTmp += lenPort2;
if(iTmp < npb->strLen && c[iTmp] == ')') {
i = iTmp + 1; /* match, so use new index */
bHaveIP2 = 1;
}
}
}
}
}
/* check if optional username is present
* We assume we must at least have 3 chars ["(n)"]
*/
int bHaveUser = 0;
size_t idxUser = 0;
size_t lenUser = 0;
if( (i+2 < npb->strLen && c[i] == '(' && !isspace(c[i+1]) )
|| (i+3 < npb->strLen && c[i] == ' ' && c[i+1] == '(' && !isspace(c[i+2])) ) {
idxUser = i + ((c[i] == ' ') ? 2 : 1); /* skip [SP]'(' */
size_t iTmp = idxUser;
while(iTmp < npb->strLen && !isspace(c[iTmp]) && c[iTmp] != ')')
++iTmp; /* just scan */
if(iTmp < npb->strLen && c[iTmp] == ')') {
i = iTmp + 1; /* we have a match, so use new index */
bHaveUser = 1;
lenUser = iTmp - idxUser;
}
}
/* all done, save data */
if(value == NULL)
goto success;
CHKN(*value = json_object_new_object());
json_object *json;
if(bHaveInterface) {
CHKN(json = json_object_new_string_len(c+idxInterface, lenInterface));
json_object_object_add_ex(*value, "interface", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
CHKN(json = json_object_new_string_len(c+idxIP, lenIP));
json_object_object_add_ex(*value, "ip", json, JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
CHKN(json = json_object_new_string_len(c+idxPort, lenPort));
json_object_object_add_ex(*value, "port", json, JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
if(bHaveIP2) {
CHKN(json = json_object_new_string_len(c+idxIP2, lenIP2));
json_object_object_add_ex(*value, "ip2", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
CHKN(json = json_object_new_string_len(c+idxPort2, lenPort2));
json_object_object_add_ex(*value, "port2", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
if(bHaveUser) {
CHKN(json = json_object_new_string_len(c+idxUser, lenUser));
json_object_object_add_ex(*value, "user", json,
JSON_C_OBJECT_ADD_KEY_IS_NEW|JSON_C_OBJECT_KEY_IS_CONSTANT);
}
success: /* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL; /* to be on the safe side */
}
return r;
}
/**
* Parse a duration. A duration is similar to a timestamp, except that
* it tells about time elapsed. As such, hours can be larger than 23
* and hours may also be specified by a single digit (this, for example,
* is commonly done in Cisco software).
* Note: we do manual loop unrolling -- this is fast AND efficient.
*/
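/* Examples (illustrative): "0:00:42" and "72:10:00" both match (one- or
 * two-digit hours, values above 23 permitted), while "123:10:00" does not,
 * because at most two hour digits are consumed.
 */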
PARSER_Parse(Duration)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
/* hour is a bit tricky */
if(!myisdigit(c[i])) goto done;
++i;
if(myisdigit(c[i]))
++i;
if(c[i] == ':')
++i;
else
goto done;
if(i+5 > npb->strLen)
goto done;/* if it is not 5 chars from here, it can't be us */
if(c[i] < '0' || c[i] > '5') goto done;
if(!myisdigit(c[i+1])) goto done;
if(c[i+2] != ':') goto done;
if(c[i+3] < '0' || c[i+3] > '5') goto done;
if(!myisdigit(c[i+4])) goto done;
/* success, persist */
*parsed = (i + 5) - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/**
* Parse a timestamp in 24hr format (exactly HH:MM:SS).
* Note: we do manual loop unrolling -- this is fast AND efficient.
* rgerhards, 2011-01-14
*/
PARSER_Parse(Time24hr)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
if(*offs+8 > npb->strLen)
goto done; /* if it is not 8 chars, it can't be us */
/* hour */
if(c[i] == '0' || c[i] == '1') {
if(!myisdigit(c[i+1])) goto done;
} else if(c[i] == '2') {
if(c[i+1] < '0' || c[i+1] > '3') goto done;
} else {
goto done;
}
/* TODO: the code below is a duplicate of 24hr parser - create common function */
if(c[i+2] != ':') goto done;
if(c[i+3] < '0' || c[i+3] > '5') goto done;
if(!myisdigit(c[i+4])) goto done;
if(c[i+5] != ':') goto done;
if(c[i+6] < '0' || c[i+6] > '5') goto done;
if(!myisdigit(c[i+7])) goto done;
/* success, persist */
*parsed = 8;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/**
* Parse a timestamp in 12hr format (exactly HH:MM:SS).
* Note: we do manual loop unrolling -- this is fast AND efficient.
* TODO: the code below is a duplicate of 24hr parser - create common function?
* rgerhards, 2011-01-14
*/
PARSER_Parse(Time12hr)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
c = npb->str;
i = *offs;
if(*offs+8 > npb->strLen)
goto done; /* if it is not 8 chars, it can't be us */
/* hour */
if(c[i] == '0') {
if(!myisdigit(c[i+1])) goto done;
} else if(c[i] == '1') {
if(c[i+1] < '0' || c[i+1] > '2') goto done;
} else {
goto done;
}
if(c[i+2] != ':') goto done;
if(c[i+3] < '0' || c[i+3] > '5') goto done;
if(!myisdigit(c[i+4])) goto done;
if(c[i+5] != ':') goto done;
if(c[i+6] < '0' || c[i+6] > '5') goto done;
if(!myisdigit(c[i+7])) goto done;
/* success, persist */
*parsed = 8;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/* helper to the IPv4 address parser, checks the next group of digits.
* Syntax: 1 to 3 digits, with a value not larger than 255.
* @param[in] npb->str parse buffer
* @param[in/out] offs offset into buffer, updated if successful
* @return 0 if OK, 1 otherwise
*/
static int
chkIPv4AddrByte(npb_t *const npb, size_t *offs)
{
int val = 0;
int r = 1; /* default: done -- simplifies things */
const char *c;
size_t i = *offs;
c = npb->str;
if(i == npb->strLen || !myisdigit(c[i]))
goto done;
val = c[i++] - '0';
if(i < npb->strLen && myisdigit(c[i])) {
val = val * 10 + c[i++] - '0';
if(i < npb->strLen && myisdigit(c[i]))
val = val * 10 + c[i++] - '0';
}
if(val > 255) /* cannot be a valid IP address byte! */
goto done;
*offs = i;
r = 0;
done:
return r;
}
/**
* Parser for IPv4 addresses.
*/
PARSER_Parse(IPv4)
const char *c;
size_t i;
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
i = *offs;
if(i + 7 > npb->strLen) {
/* IPv4 addr requires at least 7 characters */
goto done;
}
c = npb->str;
/* byte 1*/
if(chkIPv4AddrByte(npb, &i) != 0) goto done;
if(i == npb->strLen || c[i++] != '.') goto done;
/* byte 2*/
if(chkIPv4AddrByte(npb, &i) != 0) goto done;
if(i == npb->strLen || c[i++] != '.') goto done;
/* byte 3*/
if(chkIPv4AddrByte(npb, &i) != 0) goto done;
if(i == npb->strLen || c[i++] != '.') goto done;
/* byte 4 - we do NOT need any char behind it! */
if(chkIPv4AddrByte(npb, &i) != 0) goto done;
/* if we reach this point, we found a valid IP address */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/* skip past the IPv6 address block, parse pointer is set to
* first char after the block. Returns an error if already at end
* of string.
* @param[in] npb->str parse buffer
* @param[in/out] offs offset into buffer, updated if successful
* @return 0 if OK, 1 otherwise
*/
static int
skipIPv6AddrBlock(npb_t *const npb,
size_t *const __restrict__ offs)
{
int j;
if(*offs == npb->strLen)
return 1;
for(j = 0 ; j < 4 && *offs+j < npb->strLen && isxdigit(npb->str[*offs+j]) ; ++j)
/*just skip*/ ;
*offs += j;
return 0;
}
/**
* Parser for IPv6 addresses.
* Based on RFC4291 Section 2.2. The address must be followed
* by whitespace or end-of-string, else it is not considered
* a valid address. This prevents false positives.
*/
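/* Examples (illustrative): "2001:db8::1" matches, as does the embedded-IPv4
 * form "::ffff:192.0.2.1"; "2001:db8::1suffix" does not, because the address
 * is not followed by whitespace or end-of-string.
 */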
PARSER_Parse(IPv6)
const char *c;
size_t i;
size_t beginBlock; /* last block begin in case we need IPv4 parsing */
int hasIPv4 = 0;
int nBlocks = 0; /* how many blocks did we already have? */
int bHad0Abbrev = 0; /* :: already used? */
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
i = *offs;
if(i + 2 > npb->strLen) {
/* IPv6 addr requires at least 2 characters ("::") */
goto done;
}
c = npb->str;
/* check that first block is non-empty */
if(! ( isxdigit(c[i]) || (c[i] == ':' && c[i+1] == ':') ) )
goto done;
/* try for all potential blocks plus one more (so we see errors!) */
for(int j = 0 ; j < 9 ; ++j) {
beginBlock = i;
if(skipIPv6AddrBlock(npb, &i) != 0) goto done;
nBlocks++;
if(i == npb->strLen) goto chk_ok;
if(isspace(c[i])) goto chk_ok;
if(c[i] == '.'){ /* IPv4 processing! */
hasIPv4 = 1;
break;
}
if(c[i] != ':') goto done;
i++; /* "eat" ':' */
if(i == npb->strLen) goto chk_ok;
/* check for :: */
if(bHad0Abbrev) {
if(c[i] == ':') goto done;
} else {
if(c[i] == ':') {
bHad0Abbrev = 1;
++i;
if(i == npb->strLen) goto chk_ok;
}
}
}
if(hasIPv4) {
size_t ipv4_parsed;
--nBlocks;
/* prevent a pure IPv4 address from being recognized */
if(beginBlock == *offs) goto done;
i = beginBlock;
if(ln_v2_parseIPv4(npb, &i, NULL, &ipv4_parsed, NULL) != 0)
goto done;
i += ipv4_parsed;
}
chk_ok: /* we are finished parsing, check if things are ok */
if(nBlocks > 8) goto done;
if(bHad0Abbrev && nBlocks >= 8) goto done;
/* now check if trailing block is missing. Note that i is already
* on next character, so we need to go two back. Two are always
* present, else we would not reach this code here.
*/
if(c[i-1] == ':' && c[i-2] != ':') goto done;
/* if we reach this point, we found a valid IP address */
*parsed = i - *offs;
if(value != NULL) {
*value = json_object_new_string_len(npb->str+(*offs), *parsed);
}
r = 0; /* success */
done:
return r;
}
/* check if a char is valid inside a name of the iptables motif.
* We try to keep the set as slim as possible, because the iptables
* parser may otherwise create a very broad match (especially the
* inclusion of simple words like "DF" cause grief here).
* Note: we have taken the permitted set from iptables log samples.
* Report bugs if we missed some additional rules.
*/
static inline int
isValidIPTablesNameChar(const char c)
{
/* right now, upper case only is valid */
return ('A' <= c && c <= 'Z') ? 1 : 0;
}
/* helper to the iptables parser, parses out a single name=value pair
*/
static int
parseIPTablesNameValue(npb_t *const npb,
size_t *const __restrict__ offs,
struct json_object *const __restrict__ valroot)
{
int r = LN_WRONGPARSER;
size_t i = *offs;
char *name = NULL;
const size_t iName = i;
while(i < npb->strLen && isValidIPTablesNameChar(npb->str[i]))
++i;
if(i == iName || (i < npb->strLen && npb->str[i] != '=' && npb->str[i] != ' '))
goto done; /* no name at all! */
const ssize_t lenName = i - iName;
ssize_t iVal = -1;
size_t lenVal = i - iVal;
if(i < npb->strLen && npb->str[i] != ' ') {
/* we have a real value (not just a flag name like "DF") */
++i; /* skip '=' */
iVal = i;
while(i < npb->strLen && !isspace(npb->str[i]))
++i;
lenVal = i - iVal;
}
/* parsing OK */
*offs = i;
r = 0;
if(valroot == NULL)
goto done;
CHKN(name = malloc(lenName+1));
memcpy(name, npb->str+iName, lenName);
name[lenName] = '\0';
json_object *json;
if(iVal == -1) {
json = NULL;
} else {
CHKN(json = json_object_new_string_len(npb->str+iVal, lenVal));
}
json_object_object_add(valroot, name, json);
done:
free(name);
return r;
}
/**
* Parser for iptables logs (the structured part).
* This parser is named "v2-iptables" because of a traditional
* parser named "iptables", which we do not want to replace, at
* least right now (we may re-think this before the first release).
* For performance reasons, this works in two stages. In the first
* stage, we only detect if the motif is correct. The second stage is
* only called when we know it is. In it, we go over the message once
* more and actually extract the data. This is done because
* data extraction is relatively expensive and in most cases we will
* have much more frequent mismatches than matches.
* Note that this motif must have at least one field, otherwise it
* could match things that are not actually iptables output. Further limits
* may be imposed in the future as we see additional need.
* added 2015-04-30 rgerhards
*/
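/* Illustrative input (structured part of an iptables log):
 *   IN=eth0 OUT= SRC=10.0.0.1 DST=10.0.0.2 LEN=60 DF PROTO=TCP SPT=51994 DPT=22
 * Fields with '=' carry their value; flag-only fields such as "DF" are
 * stored with a null value.
 */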
PARSER_Parse(v2IPTables)
size_t i = *offs;
int nfields = 0;
/* stage one */
while(i < npb->strLen) {
CHKR(parseIPTablesNameValue(npb, &i, NULL));
++nfields;
/* exactly one SP is permitted between fields */
if(i < npb->strLen && npb->str[i] == ' ')
++i;
}
if(nfields < 2) {
FAIL(LN_WRONGPARSER);
}
/* success, persist */
*parsed = i - *offs;
r = 0;
/* stage two */
if(value == NULL)
goto done;
i = *offs;
CHKN(*value = json_object_new_object());
while(i < npb->strLen) {
CHKR(parseIPTablesNameValue(npb, &i, *value));
while(i < npb->strLen && isspace(npb->str[i]))
++i;
}
done:
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL;
}
return r;
}
/**
* Parse JSON. This parser tries to find JSON data inside a message.
* If it finds valid JSON, it will extract it. Extra data after the
* JSON is permitted.
* Note: the json-c JSON parser treats whitespace after the actual
* json to be part of the json. So in essence, any whitespace is
* processed by this parser. We use the same semantics to keep things
* neatly in sync. If json-c changes for some reason or we switch to
* an alternate json lib, we probably need to be sure to keep that
* behaviour, and probably emulate it.
* added 2015-04-28 by rgerhards, v1.1.2
*/
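/* Example (illustrative): for the input {"code": 4711} trailing text, the
 * JSON object is extracted as the value and the trailing text is left for
 * subsequent parsers.
 */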
PARSER_Parse(JSON)
const size_t i = *offs;
struct json_tokener *tokener = NULL;
if(npb->str[i] != '{' && npb->str[i] != '[') {
/* this can't be json, see RFC4627, Sect. 2
* see this bug in json-c:
* https://github.com/json-c/json-c/issues/181
* In any case, it's better to do this quick check,
* even if json-c did not have the bug because this
* check here is much faster than calling the parser.
*/
goto done;
}
if((tokener = json_tokener_new()) == NULL)
goto done;
struct json_object *const json
= json_tokener_parse_ex(tokener, npb->str+i, (int) (npb->strLen - i));
if(json == NULL)
goto done;
/* success, persist */
*parsed = (i + tokener->char_offset) - *offs;
r = 0; /* success */
if(value == NULL) {
json_object_put(json);
} else {
*value = json;
}
done:
if(tokener != NULL)
json_tokener_free(tokener);
return r;
}
/* check if a char is valid inside a name of a NameValue list
* The set of valid characters may be extended if there is good
* need to do so. We have selected the current set carefully, but
* may have overlooked some cases.
*/
static inline int
isValidNameChar(const char c)
{
return (isalnum(c)
|| c == '.'
|| c == '_'
|| c == '-'
) ? 1 : 0;
}
/* helper to the NameValue parser, parses out a single name=value pair
*
* name must be alphanumeric characters, value must be non-whitespace
* characters, if quoted then with symmetric quotes. Supported formats
* - name=value
* - name="value"
* - name='value'
* Note "name=" is valid and means a field with empty value.
* TODO: so far, quote characters are not permitted WITHIN quoted values.
*/
static int
parseNameValue(npb_t *const npb,
size_t *const __restrict__ offs,
struct json_object *const __restrict__ valroot)
{
int r = LN_WRONGPARSER;
size_t i = *offs;
char *name = NULL;
const size_t iName = i;
while(i < npb->strLen && isValidNameChar(npb->str[i]))
++i;
if(i == iName || npb->str[i] != '=')
goto done; /* no name at all! */
const size_t lenName = i - iName;
++i; /* skip '=' */
const size_t iVal = i;
while(i < npb->strLen && !isspace(npb->str[i]))
++i;
const size_t lenVal = i - iVal;
/* parsing OK */
*offs = i;
r = 0;
if(valroot == NULL)
goto done;
CHKN(name = malloc(lenName+1));
memcpy(name, npb->str+iName, lenName);
name[lenName] = '\0';
json_object *json;
CHKN(json = json_object_new_string_len(npb->str+iVal, lenVal));
json_object_object_add(valroot, name, json);
done:
free(name);
return r;
}
/**
* Parse CEE syslog.
* This essentially is a JSON parser, with additional restrictions:
* The message must start with "@cee:" and JSON must immediately follow (whitespace permitted).
* After the JSON, there must be no other non-whitespace characters.
* In other words: the message must consist of a single JSON object,
* only.
* added 2015-04-28 by rgerhards, v1.1.2
*/
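/* Example (illustrative): the message
 *   @cee: {"event": "login", "user": "root"}
 * matches; "@cee: {...} trailing text" does not, because nothing but the
 * JSON object (and whitespace) may follow the "@cee:" cookie.
 */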
PARSER_Parse(CEESyslog)
size_t i = *offs;
struct json_tokener *tokener = NULL;
struct json_object *json = NULL;
if(npb->strLen < i + 7 || /* "@cee:{}" is minimum text */
npb->str[i] != '@' ||
npb->str[i+1] != 'c' ||
npb->str[i+2] != 'e' ||
npb->str[i+3] != 'e' ||
npb->str[i+4] != ':')
goto done;
/* skip whitespace */
for(i += 5 ; i < npb->strLen && isspace(npb->str[i]) ; ++i)
/* just skip */;
if(i == npb->strLen || npb->str[i] != '{')
goto done;
/* note: we do not permit arrays in CEE mode */
if((tokener = json_tokener_new()) == NULL)
goto done;
json = json_tokener_parse_ex(tokener, npb->str+i, (int) (npb->strLen - i));
if(json == NULL)
goto done;
if(i + tokener->char_offset != npb->strLen)
goto done;
/* success, persist */
*parsed = npb->strLen;
r = 0; /* success */
if(value != NULL) {
*value = json;
json = NULL; /* do NOT free below! */
}
done:
if(tokener != NULL)
json_tokener_free(tokener);
if(json != NULL)
json_object_put(json);
return r;
}
/**
* Parser for name/value pairs.
* On entry must point to alnum char. All following chars must be
* name/value pairs delimited by whitespace up until the end of string.
* For performance reasons, this works in two stages. In the first
* stage, we only detect if the motif is correct. The second stage is
* only called when we know it is. In it, we go once again over the
* message again and actually extract the data. This is done because
* data extraction is relatively expensive and in most cases we will
* have much more frequent mismatches than matches.
* added 2015-04-25 rgerhards
*/
PARSER_Parse(NameValue)
size_t i = *offs;
/* stage one */
while(i < npb->strLen) {
CHKR(parseNameValue(npb, &i, NULL));
while(i < npb->strLen && isspace(npb->str[i]))
++i;
}
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
/* stage two */
if(value == NULL)
goto done;
i = *offs;
CHKN(*value = json_object_new_object());
while(i < npb->strLen) {
CHKR(parseNameValue(npb, &i, *value));
while(i < npb->strLen && isspace(npb->str[i]))
++i;
}
/* TODO: fix mem leak if alloc json fails */
done:
return r;
}
/**
* Parse a MAC layer address.
* The standard (IEEE 802) format for printing MAC-48 addresses in
* human-friendly form is six groups of two hexadecimal digits,
* separated by hyphens (-) or colons (:), in transmission order
* (e.g. 01-23-45-67-89-ab or 01:23:45:67:89:ab ).
* This form is also commonly used for EUI-64.
* from: http://en.wikipedia.org/wiki/MAC_address
*
* This parser must start on a hex digit.
* added 2015-05-04 by rgerhards, v1.1.2
*/
PARSER_Parse(MAC48)
size_t i = *offs;
char delim;
if(npb->strLen < i + 17 || /* this motif has exactly 17 characters */
!isxdigit(npb->str[i]) ||
!isxdigit(npb->str[i+1])
)
FAIL(LN_WRONGPARSER);
if(npb->str[i+2] == ':')
delim = ':';
else if(npb->str[i+2] == '-')
delim = '-';
else
FAIL(LN_WRONGPARSER);
/* first byte ok */
if(!isxdigit(npb->str[i+3]) ||
!isxdigit(npb->str[i+4]) ||
npb->str[i+5] != delim || /* 2nd byte ok */
!isxdigit(npb->str[i+6]) ||
!isxdigit(npb->str[i+7]) ||
npb->str[i+8] != delim || /* 3rd byte ok */
!isxdigit(npb->str[i+9]) ||
!isxdigit(npb->str[i+10]) ||
npb->str[i+11] != delim || /* 4th byte ok */
!isxdigit(npb->str[i+12]) ||
!isxdigit(npb->str[i+13]) ||
npb->str[i+14] != delim || /* 5th byte ok */
!isxdigit(npb->str[i+15]) ||
!isxdigit(npb->str[i+16]) /* 6th byte ok */
)
FAIL(LN_WRONGPARSER);
/* success, persist */
*parsed = 17;
r = 0; /* success */
if(value != NULL) {
CHKN(*value = json_object_new_string_len(npb->str+i, 17));
}
done:
return r;
}
/* This parses the extension value and updates the index
* to point to the end of it.
*/
static int
cefParseExtensionValue(npb_t *const npb,
size_t *__restrict__ iEndVal)
{
int r = 0;
size_t i = *iEndVal;
size_t iLastWordBegin;
/* first find next unquoted equal sign and record begin of
* last word in front of it - this is the actual end of the
* current name/value pair and the begin of the next one.
*/
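/* Example (illustrative): while scanning the value of "act" in
 *   act=denied by policy src=10.0.0.1
 * we stop at the '=' after "src" and back up to the last word before it,
 * so the value becomes "denied by policy" and "src" starts the next pair.
 */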
int hadSP = 0;
int inEscape = 0;
for(iLastWordBegin = 0 ; i < npb->strLen ; ++i) {
if(inEscape) {
if(npb->str[i] != '=' &&
npb->str[i] != '\\' &&
npb->str[i] != 'r' &&
npb->str[i] != 'n')
FAIL(LN_WRONGPARSER);
inEscape = 0;
} else {
if(npb->str[i] == '=') {
break;
} else if(npb->str[i] == '\\') {
inEscape = 1;
} else if(npb->str[i] == ' ') {
hadSP = 1;
} else {
if(hadSP) {
iLastWordBegin = i;
hadSP = 0;
}
}
}
}
/* Note: iLastWordBegin can never be at offset zero, because
* the CEF header starts there!
*/
if(i < npb->strLen) {
*iEndVal = (iLastWordBegin == 0) ? i : iLastWordBegin - 1;
} else {
*iEndVal = i;
}
done:
return r;
}
/* must be positioned on first char of name, returns index
* of end of name.
* Note: ArcSight violates the CEF spec itself: they generate
* leading underscores in their extension names, which are
* definitely not alphanumeric. We still accept them...
* They also seem to use dots.
*/
static int
cefParseName(npb_t *const npb,
size_t *const __restrict__ i)
{
int r = 0;
while(*i < npb->strLen && npb->str[*i] != '=') {
if(!(isalnum(npb->str[*i]) || npb->str[*i] == '_' || npb->str[*i] == '.'))
FAIL(LN_WRONGPARSER);
++(*i);
}
done:
return r;
}
/* parse CEF extensions. They are basically name=value
* pairs with the ugly exception that values may contain
* spaces but need NOT be quoted. Thankfully, at least
* names are specified as being alphanumeric without spaces
* in them. So we must add a lookahead parser to check if
* a word is a name (and thus the begin of a new pair) or
* not. This is done by subroutines.
*/
static int
cefParseExtensions(npb_t *const npb,
size_t *const __restrict__ offs,
json_object *const __restrict__ jroot)
{
int r = 0;
size_t i = *offs;
size_t iName, lenName;
size_t iValue, lenValue;
char *name = NULL;
char *value = NULL;
while(i < npb->strLen) {
while(i < npb->strLen && npb->str[i] == ' ')
++i;
iName = i;
CHKR(cefParseName(npb, &i));
if(i+1 >= npb->strLen || npb->str[i] != '=')
FAIL(LN_WRONGPARSER);
lenName = i - iName;
++i; /* skip '=' */
iValue = i;
CHKR(cefParseExtensionValue(npb, &i));
lenValue = i - iValue;
++i; /* skip past value */
if(jroot != NULL) {
CHKN(name = malloc(sizeof(char) * (lenName + 1)));
memcpy(name, npb->str+iName, lenName);
name[lenName] = '\0';
CHKN(value = malloc(sizeof(char) * (lenValue + 1)));
/* copy value but escape it */
size_t iDst = 0;
for(size_t iSrc = 0 ; iSrc < lenValue ; ++iSrc) {
if(npb->str[iValue+iSrc] == '\\') {
++iSrc; /* we know the next char must exist! */
switch(npb->str[iValue+iSrc]) {
case '=': value[iDst] = '=';
break;
case 'n': value[iDst] = '\n';
break;
case 'r': value[iDst] = '\r';
break;
case '\\': value[iDst] = '\\';
break;
default: break;
}
} else {
value[iDst] = npb->str[iValue+iSrc];
}
++iDst;
}
value[iDst] = '\0';
json_object *json;
CHKN(json = json_object_new_string(value));
json_object_object_add(jroot, name, json);
free(name); name = NULL;
free(value); value = NULL;
}
}
*offs = npb->strLen; /* this parser consumes everything or fails */
done:
free(name);
free(value);
return r;
}
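/* Illustrative sketch (not part of the build): shows how the lookahead in
* cefParseExtensionValue() decides where an unquoted value ends. The value
* of "msg" below ends right before "dst", because "dst" is the last word in
* front of the next unquoted '='. The sample string and this simplified
* splitter (it ignores escape sequences) are assumptions for demonstration
* only - they are not the library code. Expected output:
*     src -> '10.0.0.1'
*     msg -> 'failed login for user root'
*     dst -> '10.0.0.2'
*/
#if 0
#include <stdio.h>
#include <string.h>
int
main(void)
{
	const char *ext = "src=10.0.0.1 msg=failed login for user root dst=10.0.0.2";
	const size_t len = strlen(ext);
	size_t i = 0;
	while(i < len) {
		while(i < len && ext[i] == ' ')
			++i; /* skip leading SP */
		const size_t iName = i;
		while(i < len && ext[i] != '=')
			++i; /* scan the name */
		const size_t lenName = i - iName;
		++i; /* skip '=' */
		const size_t iValue = i;
		size_t iLastWordBegin = 0;
		int hadSP = 0;
		size_t j;
		for(j = i ; j < len && ext[j] != '=' ; ++j) {
			if(ext[j] == ' ') {
				hadSP = 1;
			} else if(hadSP) {
				iLastWordBegin = j; /* begin of last word before next '=' */
				hadSP = 0;
			}
		}
		const size_t iEndVal = (j < len && iLastWordBegin != 0)
					? iLastWordBegin - 1 : j;
		printf("%.*s -> '%.*s'\n", (int) lenName, ext+iName,
			(int) (iEndVal - iValue), ext+iValue);
		i = iEndVal + 1;
	}
	return 0;
}
#endif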
/* gets a CEF header field. Must be positioned on the
* first char after the '|' in front of field.
* Note that '|' may be escaped as "\|", which also means
* we need to support "\\" (see CEF spec for details).
* We return the string in *val, if val is non-null. In
* that case we allocate memory that the caller must free.
* This is necessary because there are potentially escape
* sequences inside the string.
*/
static int
cefGetHdrField(npb_t *const npb,
size_t *const __restrict__ offs,
char **val)
{
int r = 0;
size_t i = *offs;
assert(npb->str[i] != '|');
while(i < npb->strLen && npb->str[i] != '|') {
if(npb->str[i] == '\\') {
++i; /* skip esc char */
if(npb->str[i] != '\\' && npb->str[i] != '|')
FAIL(LN_WRONGPARSER);
}
++i; /* scan to next delimiter */
}
if(npb->str[i] != '|')
FAIL(LN_WRONGPARSER);
const size_t iBegin = *offs;
/* success, persist */
*offs = i + 1;
if(val == NULL) {
r = 0;
goto done;
}
const size_t len = i - iBegin;
CHKN(*val = malloc(len + 1));
size_t iDst = 0;
for(size_t iSrc = 0 ; iSrc < len ; ++iSrc) {
if(npb->str[iBegin+iSrc] == '\\')
++iSrc; /* we already checked above that this is OK! */
(*val)[iDst++] = npb->str[iBegin+iSrc];
}
(*val)[iDst] = 0;
r = 0;
done:
return r;
}
/**
* Parser for ArcSight Common Event Format (CEF) version 0.
* added 2015-05-05 by rgerhards, v1.1.2
*/
PARSER_Parse(CEF)
size_t i = *offs;
char *vendor = NULL;
char *product = NULL;
char *version = NULL;
char *sigID = NULL;
char *name = NULL;
char *severity = NULL;
/* minimum header: "CEF:0|x|x|x|x|x|x|" --> 17 chars */
if(npb->strLen < i + 17 ||
npb->str[i] != 'C' ||
npb->str[i+1] != 'E' ||
npb->str[i+2] != 'F' ||
npb->str[i+3] != ':' ||
npb->str[i+4] != '0' ||
npb->str[i+5] != '|'
) FAIL(LN_WRONGPARSER);
i += 6; /* position on '|' */
CHKR(cefGetHdrField(npb, &i, (value == NULL) ? NULL : &vendor));
CHKR(cefGetHdrField(npb, &i, (value == NULL) ? NULL : &product));
CHKR(cefGetHdrField(npb, &i, (value == NULL) ? NULL : &version));
CHKR(cefGetHdrField(npb, &i, (value == NULL) ? NULL : &sigID));
CHKR(cefGetHdrField(npb, &i, (value == NULL) ? NULL : &name));
CHKR(cefGetHdrField(npb, &i, (value == NULL) ? NULL : &severity));
++i; /* skip over terminal '|' */
/* OK, we now know we have a good header. Now, we need
* to process extensions.
* This time, we do NOT pre-process the extensions, but rather
* persist them directly to JSON. This is contrary to other
* parsers, but as the CEF header is pretty unique, this time
* it is extremely unlikely that we will get a no-match during
* extension processing. Even if so, nothing bad happens, as
* the extracted data is discarded. But the regular case saves
* us processing time and complexity. The only time when we
* cannot directly process it is when the caller asks us not
* to persist the data. So this must be handled differently.
*/
size_t iBeginExtensions = i;
CHKR(cefParseExtensions(npb, &i, NULL));
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
if(value != NULL) {
CHKN(*value = json_object_new_object());
json_object *json;
CHKN(json = json_object_new_string(vendor));
json_object_object_add(*value, "DeviceVendor", json);
CHKN(json = json_object_new_string(product));
json_object_object_add(*value, "DeviceProduct", json);
CHKN(json = json_object_new_string(version));
json_object_object_add(*value, "DeviceVersion", json);
CHKN(json = json_object_new_string(sigID));
json_object_object_add(*value, "SignatureID", json);
CHKN(json = json_object_new_string(name));
json_object_object_add(*value, "Name", json);
CHKN(json = json_object_new_string(severity));
json_object_object_add(*value, "Severity", json);
json_object *jext;
CHKN(jext = json_object_new_object());
json_object_object_add(*value, "Extensions", jext);
i = iBeginExtensions;
cefParseExtensions(npb, &i, jext);
}
done:
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL;
}
free(vendor);
free(product);
free(version);
free(sigID);
free(name);
free(severity);
return r;
}
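/* Usage sketch (not part of the build): drive this parser through the public
* API. The rulebase file name "cef.rb", its single line "rule=:%f:cef%" and
* the sample message are assumptions for illustration; the json-c header name
* may differ depending on how it is installed. The message uses "\|" to show
* an escaped pipe inside a header field.
*/
#if 0
#include <stdio.h>
#include <string.h>
#include <liblognorm.h>
#include <json.h>
int
main(void)
{
	const char *msg = "CEF:0|Vendor\\|Inc|Product|1.0|42|Test Event|5|"
			  "src=10.0.0.1 msg=hello world dst=10.0.0.2";
	ln_ctx ctx = ln_initCtx();
	if(ctx == NULL || ln_loadSamples(ctx, "cef.rb") != 0)
		return 1;
	struct json_object *json = NULL;
	if(ln_normalize(ctx, msg, strlen(msg), &json) == 0) {
		/* expect f.DeviceVendor == "Vendor|Inc" and
		 * f.Extensions.src == "10.0.0.1", among others */
		printf("%s\n", json_object_to_json_string(json));
		json_object_put(json);
	}
	ln_exitCtx(ctx);
	return 0;
}
#endif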
struct data_CheckpointLEA {
char terminator; /* '\0' - do not use */
};
/**
* Parser for Checkpoint LEA on-disk format.
* added 2015-06-18 by rgerhards, v1.1.2
*/
PARSER_Parse(CheckpointLEA)
size_t i = *offs;
size_t iName, lenName;
size_t iValue, lenValue;
int foundFields = 0;
char *name = NULL;
char *val = NULL;
struct data_CheckpointLEA *const data = (struct data_CheckpointLEA*) pdata;
while(i < npb->strLen) {
while(i < npb->strLen && npb->str[i] == ' ') /* skip leading SP */
++i;
if(i == npb->strLen) { /* OK if just trailing space */
if(foundFields == 0)
FAIL(LN_WRONGPARSER);
break; /* we are done with the loop, all processed */
} else {
++foundFields;
}
iName = i;
/* TODO: do a stricter check? ... but we don't have a spec */
if(i < npb->strLen && npb->str[i] == data->terminator) {
break;
}
while(i < npb->strLen && npb->str[i] != ':') {
++i;
}
if(i+1 >= npb->strLen || npb->str[i] != ':') {
FAIL(LN_WRONGPARSER);
}
lenName = i - iName;
++i; /* skip ':' */
while(i < npb->strLen && npb->str[i] == ' ') /* skip leading SP */
++i;
iValue = i;
while(i < npb->strLen && npb->str[i] != ';') {
++i;
}
if(i+1 > npb->strLen || npb->str[i] != ';')
FAIL(LN_WRONGPARSER);
lenValue = i - iValue;
++i; /* skip ';' */
if(value != NULL) {
CHKN(name = malloc(sizeof(char) * (lenName + 1)));
memcpy(name, npb->str+iName, lenName);
name[lenName] = '\0';
CHKN(val = malloc(sizeof(char) * (lenValue + 1)));
memcpy(val, npb->str+iValue, lenValue);
val[lenValue] = '\0';
if(*value == NULL)
CHKN(*value = json_object_new_object());
json_object *json;
CHKN(json = json_object_new_string(val));
json_object_object_add(*value, name, json);
free(name); name = NULL;
free(val); val = NULL;
}
}
/* success, persist */
*parsed = i - *offs;
r = 0; /* success */
done:
free(name);
free(val);
if(r != 0 && value != NULL && *value != NULL) {
json_object_put(*value);
*value = NULL;
}
return r;
}
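/* Worked example (assumed input, for illustration): a LEA-style line such as
*     "loc: 1; time: 12Jun2015 11:13:22; action: accept; orig: 192.168.1.1;"
* yields the name/value pairs {"loc":"1", "time":"12Jun2015 11:13:22",
* "action":"accept", "orig":"192.168.1.1"}: names end at ':', values end at
* ';', and spaces before names and values are skipped.
*/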
PARSER_Construct(CheckpointLEA)
{
int r = 0;
struct data_CheckpointLEA *data = (struct data_CheckpointLEA*) calloc(1, sizeof(struct data_CheckpointLEA));
if(json == NULL)
goto done;
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcmp(key, "terminator")) {
const char *const optval = json_object_get_string(val);
if(strlen(optval) != 1) {
ln_errprintf(ctx, 0, "terminator must be exactly one character "
"but is: '%s'", optval);
r = LN_BADCONFIG;
goto done;
}
data->terminator = *optval;
}
json_object_iter_next(&it);
}
done:
*pdata = data;
return r;
}
PARSER_Destruct(CheckpointLEA)
{
free(pdata);
}
/* helper for the repeat parser constructor: checks that the dot field name
* is only present if there is one field inside the "parser" list.
* returns 1 if ok, 0 otherwise.
*/
static int
chkNoDupeDotInParserDefs(ln_ctx ctx, struct json_object *parsers)
{
int r = 1;
int nParsers = 0;
int nDots = 0;
if(json_object_get_type(parsers) == json_type_array) {
const int maxparsers = json_object_array_length(parsers);
for(int i = 0 ; i < maxparsers ; ++i) {
++nParsers;
struct json_object *const parser
= json_object_array_get_idx(parsers, i);
struct json_object *fname;
json_object_object_get_ex(parser, "name", &fname);
if(fname != NULL) {
if(!strcmp(json_object_get_string(fname), "."))
++nDots;
}
}
}
if(nParsers > 1 && nDots > 0) {
ln_errprintf(ctx, 0, "'repeat' parser supports dot name only "
"if single parser is used in 'parser' part, invalid "
"construct: %s", json_object_get_string(parsers));
r = 0;
}
return r;
}
/**
* "repeat" special parser.
*/
PARSER_Parse(Repeat)
struct data_Repeat *const data = (struct data_Repeat*) pdata;
struct ln_pdag *endNode = NULL;
size_t strtoffs = *offs;
size_t lastKnownGood = strtoffs;
struct json_object *json_arr = NULL;
const size_t parsedTo_save = npb->parsedTo;
do {
struct json_object *parsed_value = json_object_new_object();
r = ln_normalizeRec(npb, data->parser, strtoffs, 1,
parsed_value, &endNode);
strtoffs = npb->parsedTo;
LN_DBGPRINTF(npb->ctx, "repeat parser returns %d, parsed %zu, json: %s",
r, npb->parsedTo, json_object_to_json_string(parsed_value));
if(r != 0) {
json_object_put(parsed_value);
if(data->permitMismatchInParser) {
strtoffs = lastKnownGood; /* go back to final match */
LN_DBGPRINTF(npb->ctx, "mismatch in repeat, "
"parse ptr back to %zd", strtoffs);
goto success;
} else {
goto done;
}
}
if(json_arr == NULL) {
json_arr = json_object_new_array();
}
/* check for name=".", which means we need to place the
* value directly into the array. As we do not have direct
* access to the key, we loop over our result as a work-
* around.
*/
struct json_object *toAdd = parsed_value;
struct json_object_iterator it = json_object_iter_begin(parsed_value);
struct json_object_iterator itEnd = json_object_iter_end(parsed_value);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(key[0] == '.' && key[1] == '\0') {
json_object_get(val); /* inc refcount! */
toAdd = val;
}
json_object_iter_next(&it);
}
json_object_array_add(json_arr, toAdd);
if(toAdd != parsed_value)
json_object_put(parsed_value);
LN_DBGPRINTF(npb->ctx, "arr: %s", json_object_to_json_string(json_arr));
/* now check if we shall continue */
npb->parsedTo = 0;
lastKnownGood = strtoffs; /* record pos in case of fail in while */
r = ln_normalizeRec(npb, data->while_cond, strtoffs, 1, NULL, &endNode);
LN_DBGPRINTF(npb->ctx, "repeat while returns %d, parsed %zu",
r, npb->parsedTo);
if(r == 0)
strtoffs = npb->parsedTo;
} while(r == 0);
success:
/* success, persist */
*parsed = strtoffs - *offs;
if(value == NULL) {
json_object_put(json_arr);
} else {
*value = json_arr;
}
npb->parsedTo = parsedTo_save;
r = 0; /* success */
done:
if(r != 0 && json_arr != NULL) {
json_object_put(json_arr);
}
return r;
}
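/* Rulebase sketch (assumed syntax, for illustration): parse a list such as
* "1:2, 3:4, 5:6" into an array "pairs" of {"n1":..., "n2":...} objects.
* With a single sub-parser named "." the bare values themselves would be
* placed into the array (see chkNoDupeDotInParserDefs() above for the
* related restriction).
*
* rule=:%{"name":"pairs", "type":"repeat",
*         "parser":[
*                  {"type":"number", "name":"n1"},
*                  {"type":"literal", "text":":"},
*                  {"type":"number", "name":"n2"}
*                  ],
*         "while":[
*                  {"type":"literal", "text":", "}
*                 ]
*        }%
*/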
PARSER_Construct(Repeat)
{
int r = 0;
struct data_Repeat *data = (struct data_Repeat*) calloc(1, sizeof(struct data_Repeat));
struct ln_pdag *endnode; /* we need this for ln_pdagAddParser, which updates its param! */
if(json == NULL)
goto done;
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcmp(key, "parser")) {
if(chkNoDupeDotInParserDefs(ctx, val) != 1) {
r = LN_BADCONFIG;
goto done;
}
endnode = data->parser = ln_newPDAG(ctx);
json_object_get(val); /* prevent free in pdagAddParser */
CHKR(ln_pdagAddParser(ctx, &endnode, val));
endnode->flags.isTerminal = 1;
} else if(!strcmp(key, "while")) {
endnode = data->while_cond = ln_newPDAG(ctx);
json_object_get(val); /* prevent free in pdagAddParser */
CHKR(ln_pdagAddParser(ctx, &endnode, val));
endnode->flags.isTerminal = 1;
} else if(!strcasecmp(key, "option.permitMismatchInParser")) {
data->permitMismatchInParser = json_object_get_boolean(val);
} else {
ln_errprintf(ctx, 0, "invalid param for repeat: %s",
json_object_to_json_string(val));
}
json_object_iter_next(&it);
}
done:
if(data->parser == NULL || data->while_cond == NULL) {
ln_errprintf(ctx, 0, "repeat parser needs 'parser','while' parameters");
ln_destructRepeat(ctx, data);
r = LN_BADCONFIG;
} else {
*pdata = data;
}
return r;
}
PARSER_Destruct(Repeat)
{
struct data_Repeat *const data = (struct data_Repeat*) pdata;
if(data->parser != NULL)
ln_pdagDelete(data->parser);
if(data->while_cond != NULL)
ln_pdagDelete(data->while_cond);
free(pdata);
}
/* string escaping modes */
#define ST_ESC_NONE 0
#define ST_ESC_BACKSLASH 1
#define ST_ESC_DOUBLE 2
#define ST_ESC_BOTH 3
struct data_String {
enum { ST_QUOTE_AUTO = 0, ST_QUOTE_NONE = 1, ST_QUOTE_REQD = 2 }
quoteMode;
struct {
unsigned strip_quotes : 1;
unsigned esc_md : 2;
} flags;
enum { ST_MATCH_EXACT = 0, ST_MATCH_LAZY = 1} matching;
char qchar_begin;
char qchar_end;
char perm_chars[256]; // TODO: make this bit-wise, so we need only 32 bytes
};
static inline void
stringSetPermittedChar(struct data_String *const data, char c, int val)
{
#if 0
const int i = (unsigned) c / 8;
const int shft = (unsigned) c % 8;
const unsigned mask = ~(1 << shft);
perm_arr[i] = (perm_arr[i] & (0xff
#endif
data->perm_chars[(unsigned char)c] = val; /* unsigned char: avoid negative index for chars >= 0x80 */
}
static inline int
stringIsPermittedChar(struct data_String *const data, char c)
{
return data->perm_chars[(unsigned char)c];
}
static void
stringAddPermittedCharArr(struct data_String *const data,
const char *const optval)
{
const size_t nchars = strlen(optval);
for(size_t i = 0 ; i < nchars ; ++i) {
stringSetPermittedChar(data, optval[i], 1);
}
}
static void
stringAddPermittedFromTo(struct data_String *const data,
const unsigned char from,
const unsigned char to)
{
assert(from <= to);
for(size_t i = from ; i <= to ; ++i) {
stringSetPermittedChar(data, (char) i, 1);
}
}
static inline void
stringAddPermittedChars(struct data_String *const data,
struct json_object *const val)
{
const char *const optval = json_object_get_string(val);
if(optval == NULL)
return;
stringAddPermittedCharArr(data, optval);
}
static void
stringAddPermittedCharsViaArray(ln_ctx ctx, struct data_String *const data,
struct json_object *const arr)
{
const int nelem = json_object_array_length(arr);
for(int i = 0 ; i < nelem ; ++i) {
struct json_object *const elem
= json_object_array_get_idx(arr, i);
struct json_object_iterator it = json_object_iter_begin(elem);
struct json_object_iterator itEnd = json_object_iter_end(elem);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcasecmp(key, "chars")) {
stringAddPermittedChars(data, val);
} else if(!strcasecmp(key, "class")) {
const char *const optval = json_object_get_string(val);
if(!strcasecmp(optval, "digit")) {
stringAddPermittedCharArr(data, "0123456789");
} else if(!strcasecmp(optval, "hexdigit")) {
stringAddPermittedCharArr(data, "0123456789aAbBcCdDeEfF");
} else if(!strcasecmp(optval, "alpha")) {
stringAddPermittedFromTo(data, 'a', 'z');
stringAddPermittedFromTo(data, 'A', 'Z');
} else if(!strcasecmp(optval, "alnum")) {
stringAddPermittedCharArr(data, "0123456789");
stringAddPermittedFromTo(data, 'a', 'z');
stringAddPermittedFromTo(data, 'A', 'Z');
} else {
ln_errprintf(ctx, 0, "invalid character class '%s'",
optval);
}
}
json_object_iter_next(&it);
}
}
}
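/* Config sketch (assumed syntax, for illustration): "matching.permitted"
* may be given either as a plain string of permitted characters or as an
* array mixing character classes and explicit characters, e.g.:
*
*   %{"name":"host", "type":"string",
*     "matching.permitted":[ {"class":"alnum"}, {"chars":".-_"} ]}%
*/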
/**
* generic string parser
*/
PARSER_Parse(String)
assert(npb->str != NULL);
assert(offs != NULL);
assert(parsed != NULL);
struct data_String *const data = (struct data_String*) pdata;
size_t i = *offs;
int bHaveQuotes = 0;
int bHadEndQuote = 0;
int bHadEscape = 0;
if(i == npb->strLen) goto done;
if((data->quoteMode == ST_QUOTE_AUTO) && (npb->str[i] == data->qchar_begin)) {
bHaveQuotes = 1;
++i;
} else if(data->quoteMode == ST_QUOTE_REQD) {
if(npb->str[i] == data->qchar_begin) {
bHaveQuotes = 1;
++i;
} else {
goto done;
}
}
/* scan string */
while(i < npb->strLen) {
if(bHaveQuotes) {
if(npb->str[i] == data->qchar_end) {
if(data->flags.esc_md == ST_ESC_DOUBLE
|| data->flags.esc_md == ST_ESC_BOTH) {
/* may be escaped, need to check! */
if(i+1 < npb->strLen
&& npb->str[i+1] == data->qchar_end) {
bHadEscape = 1;
++i;
} else { /* not escaped -> terminal */
bHadEndQuote = 1;
break;
}
} else {
bHadEndQuote = 1;
break;
}
}
}
if( npb->str[i] == '\\'
&& i+1 < npb->strLen
&& (data->flags.esc_md == ST_ESC_BACKSLASH
|| data->flags.esc_md == ST_ESC_BOTH) ) {
bHadEscape = 1;
i++; /* skip esc char */
}
/* terminating conditions */
if(!bHaveQuotes && npb->str[i] == ' ')
break;
if(!stringIsPermittedChar(data, npb->str[i]))
break;
i++;
}
if(bHaveQuotes && !bHadEndQuote)
goto done;
if(i == *offs)
goto done;
if((i - *offs < 1) || (data->matching == ST_MATCH_EXACT)) {
const size_t trmChkIdx = (bHaveQuotes) ? i+1 : i;
if(trmChkIdx != npb->strLen && npb->str[trmChkIdx] != ' ') /* length check first: avoid reading past strLen */
goto done;
}
/* success, persist */
*parsed = i - *offs;
if(bHadEndQuote)
++(*parsed); /* skip quote */
if(value != NULL) {
size_t strt;
size_t len;
if(bHaveQuotes && data->flags.strip_quotes) {
strt = *offs + 1;
len = *parsed - 2; /* del begin AND end quote! */
} else {
strt = *offs;
len = *parsed;
}
char *const cstr = strndup(npb->str+strt, len);
CHKN(cstr);
if(bHadEscape) {
/* need to post-process string... */
for(size_t j = 0 ; cstr[j] != '\0' ; j++) {
if( (
cstr[j] == data->qchar_end
&& cstr[j+1] == data->qchar_end
&& (data->flags.esc_md == ST_ESC_DOUBLE
|| data->flags.esc_md == ST_ESC_BOTH)
)
||
(
cstr[j] == '\\'
&& (data->flags.esc_md == ST_ESC_BACKSLASH
|| data->flags.esc_md == ST_ESC_BOTH)
) ) {
/* we need to remove the escape character */
memmove(cstr+j, cstr+j+1, len-j);
}
}
}
*value = json_object_new_string(cstr);
free(cstr);
}
r = 0; /* success */
done:
return r;
}
PARSER_Construct(String)
{
int r = 0;
struct data_String *const data = (struct data_String*) calloc(1, sizeof(struct data_String));
data->quoteMode = ST_QUOTE_AUTO;
data->flags.strip_quotes = 1;
data->flags.esc_md = ST_ESC_BOTH;
data->qchar_begin = '"';
data->qchar_end = '"';
data->matching = ST_MATCH_EXACT;
memset(data->perm_chars, 0xff, sizeof(data->perm_chars));
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
const char *key = json_object_iter_peek_name(&it);
struct json_object *const val = json_object_iter_peek_value(&it);
if(!strcasecmp(key, "quoting.mode")) {
const char *const optval = json_object_get_string(val);
if(!strcasecmp(optval, "auto")) {
data->quoteMode = ST_QUOTE_AUTO;
} else if(!strcasecmp(optval, "none")) {
data->quoteMode = ST_QUOTE_NONE;
} else if(!strcasecmp(optval, "required")) {
data->quoteMode = ST_QUOTE_REQD;
} else {
ln_errprintf(ctx, 0, "invalid quoting.mode for string parser: %s",
optval);
r = LN_BADCONFIG;
goto done;
}
} else if(!strcasecmp(key, "quoting.escape.mode")) {
const char *const optval = json_object_get_string(val);
if(!strcasecmp(optval, "none")) {
data->flags.esc_md = ST_ESC_NONE;
} else if(!strcasecmp(optval, "backslash")) {
data->flags.esc_md = ST_ESC_BACKSLASH;
} else if(!strcasecmp(optval, "double")) {
data->flags.esc_md = ST_ESC_DOUBLE;
} else if(!strcasecmp(optval, "both")) {
data->flags.esc_md = ST_ESC_BOTH;
} else {
ln_errprintf(ctx, 0, "invalid quoting.escape.mode for string "
"parser: %s", optval);
r = LN_BADCONFIG;
goto done;
}
} else if(!strcasecmp(key, "quoting.char.begin")) {
const char *const optval = json_object_get_string(val);
if(strlen(optval) != 1) {
ln_errprintf(ctx, 0, "quoting.char.begin must "
"be exactly one character but is: '%s'", optval);
r = LN_BADCONFIG;
goto done;
}
data->qchar_begin = *optval;
} else if(!strcasecmp(key, "quoting.char.end")) {
const char *const optval = json_object_get_string(val);
if(strlen(optval) != 1) {
ln_errprintf(ctx, 0, "quoting.char.end must "
"be exactly one character but is: '%s'", optval);
r = LN_BADCONFIG;
goto done;
}
data->qchar_end = *optval;
} else if(!strcasecmp(key, "matching.permitted")) {
memset(data->perm_chars, 0x00, sizeof(data->perm_chars));
if(json_object_is_type(val, json_type_string)) {
stringAddPermittedChars(data, val);
} else if(json_object_is_type(val, json_type_array)) {
stringAddPermittedCharsViaArray(ctx, data, val);
} else {
ln_errprintf(ctx, 0, "matching.permitted is invalid "
"object type, given as '%s'",
json_object_to_json_string(val));
}
} else if(!strcasecmp(key, "matching.mode")) {
const char *const optval = json_object_get_string(val);
if(!strcasecmp(optval, "strict")) {
data->matching = ST_MATCH_EXACT;
} else if(!strcasecmp(optval, "lazy")) {
data->matching = ST_MATCH_LAZY;
} else {
ln_errprintf(ctx, 0, "invalid matching.mode for string "
"parser: %s", optval);
r = LN_BADCONFIG;
goto done;
}
} else {
ln_errprintf(ctx, 0, "invalid param for string: %s",
json_object_to_json_string(val));
}
json_object_iter_next(&it);
}
if(data->quoteMode == ST_QUOTE_NONE)
data->flags.esc_md = ST_ESC_NONE;
*pdata = data;
done:
if(r != 0) {
free(data);
}
return r;
}
PARSER_Destruct(String)
{
free(pdata);
}
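/* Rulebase sketch (assumed syntax, for illustration): a string field that
* must be enclosed in single quotes, uses backslash escaping and has its
* quotes stripped:
*
* rule=:%{"name":"arg", "type":"string",
*         "quoting.mode":"required",
*         "quoting.char.begin":"'", "quoting.char.end":"'",
*         "quoting.escape.mode":"backslash"}%
*
* For the input "'it\'s here' rest", field "arg" becomes "it's here"
* (quotes stripped, escape removed) and parsing continues after the
* closing quote.
*/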
liblognorm-2.0.6/src/v1_ptree.h 0000644 0001750 0001750 00000016356 13273030617 013306 0000000 0000000 /**
* @file ptree.h
* @brief The parse tree object.
* @class ln_ptree ptree.h
*//*
* Copyright 2013 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is meant to be included by applications using liblognorm.
* For lognorm library files themselves, include "lognorm.h".
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_PTREE_H_INCLUDED
#define LIBLOGNORM_PTREE_H_INCLUDED
#include
#include
#define ORIGINAL_MSG_KEY "originalmsg"
#define UNPARSED_DATA_KEY "unparsed-data"
typedef struct ln_ptree ln_ptree; /**< the parse tree object */
typedef struct ln_fieldList_s ln_fieldList_t;
/**
* List of supported fields inside parse tree.
* This list holds all fields and their description. While normalizing,
* fields are tried in the order of this list. So the enqueue order
* dictates precedence during parsing.
*
* This is a singly-linked list. In a later stage, we should
* optimize it so that frequently used fields are moved "up" towards
* the root of the list. In any case, we do NOT expect this list to
* be long, as the parser should already have gotten quite specific when
* we hit a field.
*/
struct ln_fieldList_s {
es_str_t *name; /**< field name */
es_str_t *data; /**< extra data to be passed to parser */
es_str_t *raw_data; /**< extra untouched (unescaping is not done) data available to be used by parser */
void *parser_data; /** opaque data that the field-parser understands */
void (*parser_data_destructor)(void **); /** destroy opaque data that field-parser understands */
int (*parser)(const char*, size_t, size_t*, const ln_fieldList_t *,
size_t*, struct json_object **); /**< parser to use */
ln_ptree *subtree; /**< subtree to follow if parser succeeded */
ln_fieldList_t *next; /**< list housekeeping, next node (or NULL) */
unsigned char isIPTables; /**< special parser: iptables! */
};
/* parse tree object
*/
struct ln_ptree {
ln_ctx ctx; /**< our context */
ln_ptree **parentptr; /**< pointer to *us* *inside* the parent
BUT this is NOT a pointer to the parent! */
ln_fieldList_t *froot; /**< root of field list */
ln_fieldList_t *ftail; /**< tail of field list */
struct {
unsigned isTerminal:1; /**< designates this node a terminal sequence? */
} flags;
struct json_object *tags; /* tags to assign to events of this type */
/* the representation below requires a lot of memory but is
* very fast. As an alternate approach, we can use a hash table
* where we ignore control characters. That should work quite well.
* But we do not do this in the initial step.
*/
ln_ptree *subtree[256];
unsigned short lenPrefix; /**< length of common prefix, 0->none */
union {
unsigned char *ptr; /**< use if data element is too large */
unsigned char data[16]; /**< fast lookup for small string */
} prefix; /**< a common prefix string for all of this node */
struct {
unsigned visited;
unsigned backtracked; /**< incremented when backtracking was initiated */
unsigned terminated;
} stats; /**< usage statistics */
};
/* Methods */
/**
* Allocates and initializes a new parse tree node.
* @memberof ln_ptree
*
* @param[in] ctx current library context. This MUST match the
* context of the parent.
* @param[in] parent pointer to the new node inside the parent
*
* @return pointer to new node or NULL on error
*/
struct ln_ptree* ln_newPTree(ln_ctx ctx, struct ln_ptree** parent);
/**
* Free a parse tree and destruct all members.
* @memberof ln_ptree
*
* @param[in] tree pointer to ptree to free
*/
void ln_deletePTree(struct ln_ptree *tree);
/**
* Free a parse tree node and destruct all members.
* @memberof ln_ptree
*
* @param[in] node pointer to free
*/
void ln_deletePTreeNode(ln_fieldList_t *node);
/**
* Add a field description to the a tree.
* The field description will be added as last field. Fields are
* parsed in the order they have been added, so be sure to care
* about the order if that matters.
* @memberof ln_ptree
*
* @param[in] tree pointer to ptree to modify
* @param[in] fielddescr a fully populated (and initialized)
* field description node
* @returns 0 on success, something else otherwise
*/
int ln_addFDescrToPTree(struct ln_ptree **tree, ln_fieldList_t *node);
/**
* Add a literal to a ptree.
* Creates new tree nodes as necessary.
* @memberof ln_ptree
*
* @param[in] tree root of tree where to add
* @param[in] str literal (string) to add
* @param[in] offs offset of where in literal adding should start
*
* @return NULL on error, otherwise pointer to deepest tree added
*/
struct ln_ptree*
ln_addPTree(struct ln_ptree *tree, es_str_t *str, size_t offs);
/**
* Display the content of a ptree (debug function).
* This is a debug aid that spits out a textual representation
* of the provided ptree via multiple calls of the debug callback.
*
* @param tree ptree to display
* @param level recursion level, must be set to 0 on initial call
*/
void ln_displayPTree(struct ln_ptree *tree, int level);
/**
* Generate a DOT graph.
* Well, actually it does not generate the graph itself, but a
* control file that is suitable for the GNU DOT tool. Such a file
* can be very useful to understand complex sample databases
* (not to mention that it is probably fun for those creating
* samples).
* The dot commands are appended to the provided string.
*
* @param[in] tree ptree to display
* @param[out] str string which receives the DOT commands.
*/
void ln_genDotPTreeGraph(struct ln_ptree *tree, es_str_t **str);
/**
* Build a ptree based on the provided string, but only if necessary.
* The passed-in tree is searched and traversed for str. If a node exactly
* matching str is found, that node is returned. If no exact match is found,
* a new node is added. Existing nodes may be split, if a so-far common
* prefix needs to be split in order to add the new node.
*
* @param[in] tree root of the current tree
* @param[in] str string to be added
* @param[in] offs offset into str where match needs to start
* (this is required for recursive calls to handle
* common prefixes)
* @return NULL on error, otherwise the ptree leaf that
* corresponds to the parameters passed.
*/
struct ln_ptree * ln_buildPTree(struct ln_ptree *tree, es_str_t *str, size_t offs);
/* internal helper for displaying stats */
void ln_fullPTreeStats(ln_ctx ctx, FILE *const fp, const int extendedStats);
#endif /* #ifndef LIBLOGNORM_PTREE_H_INCLUDED */
liblognorm-2.0.6/src/v1_parser.h 0000644 0001750 0001750 00000021233 13273030617 013451 0000000 0000000 /*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2015 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_V1_PARSER_H_INCLUDED
#define LIBLOGNORM_V1_PARSER_H_INCLUDED
#include "v1_ptree.h"
/**
* Parser interface
* @param[in] str the to-be-parsed string
* @param[in] strLen length of the to-be-parsed string
* @param[in] offs an offset into the string
* @param[in] node fieldlist with additional data; for simple
* parsers, this sets variable "ed", which is just
* string data.
* @param[out] parsed number of bytes parsed
* @param[out] value json object containing parsed data (can be unused)
* @return 0 on success, something else otherwise
*/
/**
* Parser for RFC5424 date.
*/
int ln_parseRFC5424Date(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for RFC3164 date.
*/
int ln_parseRFC3164Date(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for numbers.
*/
int ln_parseNumber(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for real-number in floating-pt representation
*/
int ln_parseFloat(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for hex numbers.
*/
int ln_parseHexNumber(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for kernel timestamps.
*/
int ln_parseKernelTimestamp(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for whitespace
*/
int ln_parseWhitespace(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for Words (SP-terminated strings).
*/
int ln_parseWord(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse everything up to a specific string.
*/
int ln_parseStringTo(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for Alphabetic words (no numbers, punct, ctrl, space).
*/
int ln_parseAlpha(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse everything up to a specific character.
*/
int ln_parseCharTo(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse everything up to a specific character (relaxed constraints, suitable for CSV)
*/
int ln_parseCharSeparated(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Get everything till the rest of string.
*/
int ln_parseRest(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse an optionally quoted string.
*/
int ln_parseOpQuotedString(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node,
size_t *parsed, struct json_object **value);
/**
* Parse a quoted string.
*/
int ln_parseQuotedString(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse an ISO date.
*/
int ln_parseISODate(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse a timestamp in 12hr format.
*/
int ln_parseTime12hr(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse a timestamp in 24hr format.
*/
int ln_parseTime24hr(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse a duration.
*/
int ln_parseDuration(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for IPv4 addresses.
*/
int ln_parseIPv4(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for IPv6 addresses.
*/
int ln_parseIPv6(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse JSON.
*/
int ln_parseJSON(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse cee syslog.
*/
int ln_parseCEESyslog(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parse iptables log, the new way
*/
int ln_parsev2IPTables(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for Cisco interface specifiers
*/
int ln_parseCiscoInterfaceSpec(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node,
size_t *parsed, struct json_object **value);
/**
* Parser for 48 bit MAC layer addresses.
*/
int ln_parseMAC48(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for CEF version 0.
*/
int ln_parseCEF(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for Checkpoint LEA.
*/
int ln_parseCheckpointLEA(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Parser for name/value pairs.
*/
int ln_parseNameValue(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
/**
* Get all tokens separated by tokenizer-string as array.
*/
int ln_parseTokenized(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
void* tokenized_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx);
void tokenized_parser_data_destructor(void** dataPtr);
#ifdef FEATURE_REGEXP
/**
* Get field matching regex
*/
int ln_parseRegex(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
void* regex_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx);
void regex_parser_data_destructor(void** dataPtr);
#endif
/**
* Match using the 'current' or 'separate rulebase' all over again from current match position
*/
int ln_parseRecursive(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
void* recursive_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx);
void* descent_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx);
void recursive_parser_data_destructor(void** dataPtr);
/**
* Get interpreted field
*/
int ln_parseInterpret(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node,
size_t *parsed, struct json_object **value);
void* interpret_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx);
void interpret_parser_data_destructor(void** dataPtr);
/**
* Parse a suffixed field
*/
int ln_parseSuffixed(const char *str, size_t strlen, size_t *offs, const ln_fieldList_t *node, size_t *parsed,
struct json_object **value);
void* suffixed_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx);
void* named_suffixed_parser_data_constructor(ln_fieldList_t *node, ln_ctx ctx);
void suffixed_parser_data_destructor(void** dataPtr);
#endif /* #ifndef LIBLOGNORM_V1_PARSER_H_INCLUDED */
liblognorm-2.0.6/src/lognorm.c 0000644 0001750 0001750 00000006736 13370250152 013226 0000000 0000000 /* liblognorm - a fast samples-based log normalization library
* Copyright 2010 by Rainer Gerhards and Adiscon GmbH.
*
* This file is part of liblognorm.
*
* Released under ASL 2.0
*/
#include "config.h"
#include
#include
#include
#include
#include "liblognorm.h"
#include "lognorm.h"
/* Code taken from rsyslog ASL 2.0 code.
* From varmojfekoj's mail on why he provided rs_strerror_r():
* There are two problems with strerror_r():
* I see you've rewritten some of the code which calls it to use only
* the supplied buffer; unfortunately the GNU implementation sometimes
* doesn't use the buffer at all and returns a pointer to some
* immutable string instead, as noted in the man page.
*
* The other problem is that on some systems strerror_r() has a return
* type of int.
*
* So I've written a wrapper function rs_strerror_r(), which should
* take care of all this and be used instead.
*/
static char *
rs_strerror_r(const int errnum, char *const buf, const size_t buflen) {
#ifndef HAVE_STRERROR_R
char *pszErr;
pszErr = strerror(errnum);
snprintf(buf, buflen, "%s", pszErr);
#else
# ifdef STRERROR_R_CHAR_P
char *p = strerror_r(errnum, buf, buflen);
if (p != buf) {
strncpy(buf, p, buflen);
buf[buflen - 1] = '\0';
}
# else
strerror_r(errnum, buf, buflen);
# endif
#endif
return buf;
}
/**
* Generate some debug message and call the caller provided callback.
*
* Will first check if a user callback is registered. If not, returns
* immediately.
*/
void
ln_dbgprintf(ln_ctx ctx, const char *fmt, ...)
{
va_list ap;
char buf[8*1024];
size_t lenBuf;
if(ctx->dbgCB == NULL)
goto done;
va_start(ap, fmt);
lenBuf = vsnprintf(buf, sizeof(buf), fmt, ap);
va_end(ap);
if(lenBuf >= sizeof(buf)) {
/* prevent buffer overruns and garbage display */
buf[sizeof(buf) - 5] = '.';
buf[sizeof(buf) - 4] = '.';
buf[sizeof(buf) - 3] = '.';
buf[sizeof(buf) - 2] = '\n';
buf[sizeof(buf) - 1] = '\0';
lenBuf = sizeof(buf) - 1;
}
ctx->dbgCB(ctx->dbgCookie, buf, lenBuf);
done: return;
}
/**
* Generate error message and call the caller provided callback.
* eno is the OS errno. If non-zero, the OS error description
* will be added after the user-provided string.
*
* Will first check if a user callback is registered. If not, returns
* immediately.
*/
void
ln_errprintf(const ln_ctx ctx, const int eno, const char *fmt, ...)
{
va_list ap;
char buf[8*1024];
char errbuf[1024];
char finalbuf[9*1024];
size_t lenBuf;
char *msg;
if(ctx->errmsgCB == NULL)
goto done;
va_start(ap, fmt);
lenBuf = vsnprintf(buf, sizeof(buf), fmt, ap);
va_end(ap);
if(lenBuf >= sizeof(buf)) {
/* prevent buffer overruns and garbage display */
buf[sizeof(buf) - 5] = '.';
buf[sizeof(buf) - 4] = '.';
buf[sizeof(buf) - 3] = '.';
buf[sizeof(buf) - 2] = '\n';
buf[sizeof(buf) - 1] = '\0';
lenBuf = sizeof(buf) - 1;
}
if(eno != 0) {
rs_strerror_r(eno, errbuf, sizeof(errbuf)-1);
lenBuf = snprintf(finalbuf, sizeof(finalbuf), "%s: %s", buf, errbuf);
msg = finalbuf;
} else {
msg = buf;
}
if(ctx->conf_file != NULL) {
/* error during config processing, add line info */
const char *const m = strdup(msg);
lenBuf = snprintf(finalbuf, sizeof(finalbuf), "rulebase file %s[%d]: %s",
ctx->conf_file, ctx->conf_ln_nbr, m);
msg = finalbuf;
free((void*) m);
}
ctx->errmsgCB(ctx->dbgCookie, msg, lenBuf);
ln_dbgprintf(ctx, "%s", msg);
done: return;
}
void
ln_enableDebug(ln_ctx ctx, int i)
{
ctx->debug = i & 0x01;
}
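/* Usage sketch (not part of the build): make the output of ln_dbgprintf()
* visible by registering a debug callback. Assumes the public functions
* ln_initCtx(), ln_setDebugCB(), ln_enableDebug() and ln_exitCtx() as
* declared in liblognorm.h.
*/
#if 0
#include <stdio.h>
#include <liblognorm.h>
static void
dbg_to_stderr(void *cookie, const char *msg, size_t lenMsg)
{
	(void) cookie;
	fprintf(stderr, "liblognorm: %.*s\n", (int) lenMsg, msg);
}
int
main(void)
{
	ln_ctx ctx = ln_initCtx();
	if(ctx == NULL)
		return 1;
	ln_setDebugCB(ctx, dbg_to_stderr, NULL);
	ln_enableDebug(ctx, 1);
	/* ... load a rulebase and call ln_normalize() here ... */
	ln_exitCtx(ctx);
	return 0;
}
#endif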
liblognorm-2.0.6/src/pdag.h 0000644 0001750 0001750 00000020323 13370250152 012455 0000000 0000000 /**
* @file pdag.h
* @brief The parse DAG object.
* @class ln_pdag pdag.h
*//*
* Copyright 2015 by Rainer Gerhards and Adiscon GmbH.
*
* Released under ASL 2.0.
*/
#ifndef LIBLOGNORM_PDAG_H_INCLUDED
#define LIBLOGNORM_PDAG_H_INCLUDED
#include
#include
#include
#define META_KEY "metadata"
#define ORIGINAL_MSG_KEY "originalmsg"
#define UNPARSED_DATA_KEY "unparsed-data"
#define EXEC_PATH_KEY "exec-path"
#define META_RULE_KEY "rule"
#define RULE_MOCKUP_KEY "mockup"
#define RULE_LOCATION_KEY "location"
typedef struct ln_pdag ln_pdag; /**< the parse DAG object */
typedef struct ln_parser_s ln_parser_t;
typedef struct npb npb_t;
typedef uint8_t prsid_t;
struct ln_type_pdag;
/**
* parser IDs.
*
* These identify a parser. VERY IMPORTANT: they must start at zero
* and continuously increment. They must exactly match the index
* of the respective parser inside the parser lookup table.
*/
#define PRS_LITERAL 0
#define PRS_REPEAT 1
#if 0
#define PRS_DATE_RFC3164 1
#define PRS_DATE_RFC5424 2
#define PRS_NUMBER 3
#define PRS_FLOAT 4
#define PRS_HEXNUMBER 5
#define PRS_KERNEL_TIMESTAMP 6
#define PRS_WHITESPACE 7
#define PRS_IPV4 8
#define PRS_IPV6 9
#define PRS_WORD 10
#define PRS_ALPHA 11
#define PRS_REST 12
#define PRS_OP_QUOTED_STRING 13
#define PRS_QUOTED_STRING 14
#define PRS_DATE_ISO 15
#define PRS_TIME_24HR 16
#define PRS_TIME_12HR 17
#define PRS_DURATION 18
#define PRS_CISCO_INTERFACE_SPEC 19
#define PRS_NAME_VALUE_LIST 20
#define PRS_JSON 21
#define PRS_CEE_SYSLOG 22
#define PRS_MAC48 23
#define PRS_CEF 24
#define PRS_CHECKPOINT_LEA 25
#define PRS_v2_IPTABLES 26
#define PRS_STRING_TO 27
#define PRS_CHAR_TO 28
#define PRS_CHAR_SEP 29
#endif
#define PRS_CUSTOM_TYPE 254
#define PRS_INVALID 255
/* NOTE: current max limit on parser ID is 255, because we use uint8_t
* for the prsid_t type (which gains cache performance). If more parsers
* come up, the type must be modified.
*/
/**
* object describing a specific parser instance.
*/
struct ln_parser_s {
prsid_t prsid; /**< parser ID (for lookup table) */
ln_pdag *node; /**< node to branch to if parser succeeded */
void *parser_data; /**< opaque data that the field-parser understands */
size_t custTypeIdx; /**< index to custom type, if such is used */
int prio; /**< priority (combination of user- and parser-specific parts) */
const char *name; /**< field name */
const char *conf; /**< configuration as printable json for comparison reasons */
};
struct ln_parser_info {
const char *name; /**< parser name as used in rule base */
int prio; /**< parser specific prio in range 0..255 */
int (*construct)(ln_ctx ctx, json_object *const json, void **);
int (*parser)(npb_t *npb, size_t*, void *const,
size_t*, struct json_object **); /**< parser to use */
void (*destruct)(ln_ctx, void *const); /* note: destructor is only needed if parser data exists */
#ifdef ADVANCED_STATS
uint64_t called;
uint64_t success;
#endif
};
/* parse DAG object
*/
struct ln_pdag {
ln_ctx ctx; /**< our context */ // TODO: why do we need it?
ln_parser_t *parsers; /* array of parsers to try */
prsid_t nparsers; /**< current table size (prsid_t slightly abused) */
struct {
unsigned isTerminal:1; /**< designates this node a terminal sequence */
unsigned visited:1; /**< work var for recursive procedures */
} flags;
struct json_object *tags; /**< tags to assign to events of this type */
int refcnt; /**< reference count for deleting tracking */
struct {
unsigned called;
unsigned backtracked; /**< incremented when backtracking was initiated */
unsigned terminated;
} stats; /**< usage statistics */
const char *rb_id; /**< human-readable rulebase identifier, for stats etc */
// experimental, move outside later
const char *rb_file;
unsigned int rb_lineno;
};
#ifdef ADVANCED_STATS
struct advstats {
int pathlen;
int parser_calls; /**< parser calls in general during path */
int lit_parser_calls; /**< same just for the literal parser */
int backtracked;
int recursion_level;
es_str_t *exec_path;
};
#define ADVSTATS_MAX_ENTITIES 100
extern int advstats_max_pathlen;
extern int advstats_pathlens[ADVSTATS_MAX_ENTITIES];
extern int advstats_max_backtracked;
extern int advstats_backtracks[ADVSTATS_MAX_ENTITIES];
#endif
/** the "normalization parameter block" (npb)
* This structure is passed to all normalization routines including
* parsers. It contains data that commonly needs to be passed,
* like the to be parsed string and its length, as well as read/write
* data which is used to track information over the general
* normalization process (like the execution path, if requested).
* The main purpose is to save stack writes by eliminating the
* need for using multiple function parameters. Note that it
* must be carefully considered which items to add to the
* npb - those that change from recursion level to recursion
* level are NOT to be placed here.
*/
struct npb {
ln_ctx ctx;
const char *str; /**< to-be-normalized message */
size_t strLen; /**< length of it */
size_t parsedTo; /**< up to which byte could this be parsed? */
es_str_t *rule; /**< a mock-up of the rule used to parse */
es_str_t *exec_path;
#ifdef ADVANCED_STATS
int pathlen;
int backtracked;
int recursion_level;
struct advstats astats;
#endif
};
/* Methods */
/**
* Allocates and initializes a new parse DAG node.
* @memberof ln_pdag
*
* @param[in] ctx current library context. This MUST match the
* context of the parent.
* @param[in] parent pointer to the new node inside the parent
*
* @return pointer to new node or NULL on error
*/
struct ln_pdag* ln_newPDAG(ln_ctx ctx);
/**
* Free a parse DAG and destruct all members.
* @memberof ln_pdag
*
* @param[in] DAG pointer to pdag to free
*/
void ln_pdagDelete(struct ln_pdag *DAG);
/**
* Add parser to dag node.
* Works on an unoptimized dag.
*
* @param[in] pdag pointer to pdag to modify
* @param[in] parser parser definition
* @returns 0 on success, something else otherwise
*/
int ln_pdagAddParser(ln_ctx ctx, struct ln_pdag **pdag, json_object *);
/**
* Display the content of a pdag (debug function).
* This is a debug aid that spits out a textual representation
* of the provided pdag via multiple calls of the debug callback.
*
* @param ctx library context whose pdag is to be displayed
*/
void ln_displayPDAG(ln_ctx ctx);
/**
* Generate a DOT graph.
* Well, actually it does not generate the graph itself, but a
* control file that is suitable for the GNU DOT tool. Such a file
* can be very useful to understand complex sample databases
* (not to mention that it is probably fun for those creating
* samples).
* The dot commands are appended to the provided string.
*
* @param[in] DAG pdag to display
* @param[out] str string which receives the DOT commands.
*/
void ln_genDotPDAGGraph(struct ln_pdag *DAG, es_str_t **str);
/**
* Build a pdag based on the provided string, but only if necessary.
* The passed-in DAG is searched and traversed for str. If a node exactly
* matching str is found, that node is returned. If no exact match is found,
* a new node is added. Existing nodes may be split, if a so-far common
* prefix needs to be split in order to add the new node.
*
* @param[in] DAG root of the current DAG
* @param[in] str string to be added
* @param[in] offs offset into str where match needs to start
* (this is required for recursive calls to handle
* common prefixes)
* @return NULL on error, otherwise the pdag leaf that
* corresponds to the parameters passed.
*/
struct ln_pdag * ln_buildPDAG(struct ln_pdag *DAG, es_str_t *str, size_t offs);
prsid_t ln_parserName2ID(const char *const __restrict__ name);
int ln_pdagOptimize(ln_ctx ctx);
void ln_fullPdagStats(ln_ctx ctx, FILE *const fp, const int);
ln_parser_t * ln_newLiteralParser(ln_ctx ctx, char lit);
ln_parser_t* ln_newParser(ln_ctx ctx, json_object *const prscnf);
struct ln_type_pdag * ln_pdagFindType(ln_ctx ctx, const char *const __restrict__ name, const int bAdd);
void ln_fullPDagStatsDOT(ln_ctx ctx, FILE *const fp);
/* friends */
int
ln_normalizeRec(npb_t *const __restrict__ npb,
struct ln_pdag *dag,
const size_t offs,
const int bPartialMatch,
struct json_object *json,
struct ln_pdag **endNode
);
#endif /* #ifndef LIBLOGNORM_PDAG_H_INCLUDED */
liblognorm-2.0.6/src/parser.h 0000644 0001750 0001750 00000006132 13370250152 013040 0000000 0000000 /*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2015 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_PARSER_H_INCLUDED
#define LIBLOGNORM_PARSER_H_INCLUDED
#include "pdag.h"
/**
* Parser interface
* @param[in] str the to-be-parsed string
* @param[in] strLen length of the to-be-parsed string
* @param[in] offs an offset into the string
* @param[out] parsed number of bytes parsed
* @param[out] value pointer to json object receiving the parsed data (can be unused);
*             if NULL on input, the object is NOT persisted
* @return 0 on success, something else otherwise
*/
// TODO #warning check how to handle "value" - does it need to be set to NULL?
#define PARSERDEF_NO_DATA(parser) \
int ln_v2_parse##parser(npb_t *npb, size_t *offs, void *const, size_t *parsed, struct json_object **value)
#define PARSERDEF(parser) \
int ln_construct##parser(ln_ctx ctx, json_object *const json, void **pdata); \
int ln_v2_parse##parser(npb_t *npb, size_t *offs, void *const, size_t *parsed, struct json_object **value); \
void ln_destruct##parser(ln_ctx ctx, void *const pdata)
PARSERDEF(RFC5424Date);
PARSERDEF(RFC3164Date);
PARSERDEF(Number);
PARSERDEF(Float);
PARSERDEF(HexNumber);
PARSERDEF_NO_DATA(KernelTimestamp);
PARSERDEF_NO_DATA(Whitespace);
PARSERDEF_NO_DATA(Word);
PARSERDEF(StringTo);
PARSERDEF_NO_DATA(Alpha);
PARSERDEF(Literal);
PARSERDEF(CharTo);
PARSERDEF(CharSeparated);
PARSERDEF(Repeat);
PARSERDEF(String);
PARSERDEF_NO_DATA(Rest);
PARSERDEF_NO_DATA(OpQuotedString);
PARSERDEF_NO_DATA(QuotedString);
PARSERDEF_NO_DATA(ISODate);
PARSERDEF_NO_DATA(Time12hr);
PARSERDEF_NO_DATA(Time24hr);
PARSERDEF_NO_DATA(Duration);
PARSERDEF_NO_DATA(IPv4);
PARSERDEF_NO_DATA(IPv6);
PARSERDEF_NO_DATA(JSON);
PARSERDEF_NO_DATA(CEESyslog);
PARSERDEF_NO_DATA(v2IPTables);
PARSERDEF_NO_DATA(CiscoInterfaceSpec);
PARSERDEF_NO_DATA(MAC48);
PARSERDEF_NO_DATA(CEF);
PARSERDEF(CheckpointLEA);
PARSERDEF_NO_DATA(NameValue);
#undef PARSERDEF_NO_DATA
/* utility functions */
int ln_combineData_Literal(void *const org, void *const add);
/* definitions for friends */
struct data_Repeat {
ln_pdag *parser;
ln_pdag *while_cond;
int permitMismatchInParser;
};
#endif /* #ifndef LIBLOGNORM_PARSER_H_INCLUDED */
liblognorm-2.0.6/src/lognorm-features.h.in 0000644 0001750 0001750 00000000076 13273030617 015447 0000000 0000000 #if @FEATURE_REGEXP@
#define LOGNORM_REGEX_SUPPORTED 1
#endif
liblognorm-2.0.6/src/enc_csv.c 0000644 0001750 0001750 00000012701 13273030617 013162 0000000 0000000 /**
* @file enc_csv.c
* Encoder for CSV format. Note: the CEE group is currently thinking about
* what a CEE-compliant CSV format may look like. As such, the format of
* this output will most probably change once the final decision
* has been made. At this time (2010-12), I do NOT even try to
* stay in line with the discussion.
*
* This file contains code from all related objects that is required in
* order to encode this format. The core idea of putting all of this into
* a single file is that this makes it very straightforward to write
* encoders for different encodings, as all is in one place.
*
*/
/*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2018 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include
#include
#include
#include
#include
#include
#include "lognorm.h"
#include "internal.h"
#include "enc.h"
static char hexdigit[16] =
{'0', '1', '2', '3', '4', '5', '6', '7', '8',
'9', 'A', 'B', 'C', 'D', 'E', 'F' };
/* TODO: CSV encoding for Unicode characters is as of RFC4627 not fully
* supported. The algorithm is that we must build the wide character from
* UTF-8 (if char > 127) and build the full 4-octet Unicode character out
* of it. Then, this needs to be encoded. Currently, we work on a
* byte-by-byte basis, which simply is incorrect.
* rgerhards, 2010-11-09
*/
static int
ln_addValue_CSV(const char *buf, es_str_t **str)
{
int r;
unsigned char c;
es_size_t i;
char numbuf[4];
int j;
assert(str != NULL);
assert(*str != NULL);
assert(buf != NULL);
for(i = 0; i < strlen(buf); i++) {
c = buf[i];
if((c >= 0x23 && c <= 0x5b)
|| (c >= 0x5d /* && c <= 0x10FFFF*/)
|| c == 0x20 || c == 0x21) {
/* no need to escape */
es_addChar(str, c);
} else {
/* we must escape, try RFC4627-defined special sequences first */
switch(c) {
case '\0':
es_addBuf(str, "\\u0000", 6);
break;
case '\"':
es_addBuf(str, "\\\"", 2);
break;
case '\\':
es_addBuf(str, "\\\\", 2);
break;
case '\010':
es_addBuf(str, "\\b", 2);
break;
case '\014':
es_addBuf(str, "\\f", 2);
break;
case '\n':
es_addBuf(str, "\\n", 2);
break;
case '\r':
es_addBuf(str, "\\r", 2);
break;
case '\t':
es_addBuf(str, "\\t", 2);
break;
default:
/* TODO : proper Unicode encoding (see header comment) */
for(j = 0 ; j < 4 ; ++j) {
numbuf[3-j] = hexdigit[c % 16];
c = c / 16;
}
es_addBuf(str, "\\u", 2);
es_addBuf(str, numbuf, 4);
break;
}
}
}
r = 0;
return r;
}
static int
ln_addField_CSV(struct json_object *field, es_str_t **str)
{
int r, i;
struct json_object *obj;
int needComma = 0;
const char *value;
assert(field != NULL);
assert(str != NULL);
assert(*str != NULL);
switch(json_object_get_type(field)) {
case json_type_array:
CHKR(es_addChar(str, '['));
for (i = json_object_array_length(field) - 1; i >= 0; i--) {
if(needComma)
es_addChar(str, ',');
else
needComma = 1;
CHKN(obj = json_object_array_get_idx(field, i));
CHKN(value = json_object_get_string(obj));
CHKR(ln_addValue_CSV(value, str));
}
CHKR(es_addChar(str, ']'));
break;
case json_type_string:
case json_type_int:
CHKN(value = json_object_get_string(field));
CHKR(ln_addValue_CSV(value, str));
break;
case json_type_null:
case json_type_boolean:
case json_type_double:
case json_type_object:
CHKR(es_addBuf(str, "***unsupported type***", sizeof("***unsupported type***")-1));
break;
default:
CHKR(es_addBuf(str, "***OBJECT***", sizeof("***OBJECT***")-1));
}
r = 0;
done:
return r;
}
int
ln_fmtEventToCSV(struct json_object *json, es_str_t **str, es_str_t *extraData)
{
int r = -1;
int needComma = 0;
struct json_object *field;
char *namelist = NULL, *name, *nn;
assert(json != NULL);
assert(json_object_is_type(json, json_type_object));
if((*str = es_newStr(256)) == NULL)
goto done;
if(extraData == NULL)
goto done;
CHKN(namelist = es_str2cstr(extraData, NULL));
for (name = namelist; name != NULL; name = nn) {
for (nn = name; *nn != '\0' && *nn != ',' && *nn != ' '; nn++)
{ /* do nothing */ }
if (*nn == '\0') {
nn = NULL;
} else {
*nn = '\0';
nn++;
}
json_object_object_get_ex(json, name, &field);
if (needComma) {
CHKR(es_addChar(str, ','));
} else {
needComma = 1;
}
if (field != NULL) {
CHKR(es_addChar(str, '"'));
ln_addField_CSV(field, str);
CHKR(es_addChar(str, '"'));
}
}
r = 0;
done:
if (namelist != NULL)
free(namelist);
return r;
}
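/* A minimal usage sketch (not part of the original source): extraData selects
 * and orders the CSV columns as a comma- or space-separated list of field
 * names; a field missing from the event simply yields an empty column. The
 * event variable and the column list used here are assumptions. Kept disabled
 * so it does not affect the build.
 */
#if 0
static void
example_csv_output(struct json_object *const event)
{
	es_str_t *out = NULL;
	es_str_t *cols = es_newStrFromCStr("timestamp,host,msg",
		sizeof("timestamp,host,msg") - 1);
	if(cols != NULL && ln_fmtEventToCSV(event, &out, cols) == 0) {
		char *const cstr = es_str2cstr(out, NULL);
		printf("%s\n", cstr);
		free(cstr);
	}
	if(out != NULL)
		es_deleteStr(out);
	if(cols != NULL)
		es_deleteStr(cols);
}
#endif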
liblognorm-2.0.6/src/Makefile.in 0000644 0001750 0001750 00000133116 13370251155 013447 0000000 0000000 # Makefile.in generated by automake 1.15.1 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2017 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
am__is_gnu_make = { \
if test -z '$(MAKELEVEL)'; then \
false; \
elif test -n '$(MAKE_HOST)'; then \
true; \
elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
true; \
else \
false; \
fi; \
}
am__make_running_with_option = \
case $${target_option-} in \
?) ;; \
*) echo "am__make_running_with_option: internal error: invalid" \
"target option '$${target_option-}' specified" >&2; \
exit 1;; \
esac; \
has_opt=no; \
sane_makeflags=$$MAKEFLAGS; \
if $(am__is_gnu_make); then \
sane_makeflags=$$MFLAGS; \
else \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
bs=\\; \
sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
| sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
esac; \
fi; \
skip_next=no; \
strip_trailopt () \
{ \
flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
}; \
for flg in $$sane_makeflags; do \
test $$skip_next = yes && { skip_next=no; continue; }; \
case $$flg in \
*=*|--*) continue;; \
-*I) strip_trailopt 'I'; skip_next=yes;; \
-*I?*) strip_trailopt 'I';; \
-*O) strip_trailopt 'O'; skip_next=yes;; \
-*O?*) strip_trailopt 'O';; \
-*l) strip_trailopt 'l'; skip_next=yes;; \
-*l?*) strip_trailopt 'l';; \
-[dEDm]) skip_next=yes;; \
-[JT]) skip_next=yes;; \
esac; \
case $$flg in \
*$$target_option*) has_opt=yes; break;; \
esac; \
done; \
test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
host_triplet = @host@
bin_PROGRAMS = lognormalizer$(EXEEXT)
check_PROGRAMS = ln_test$(EXEEXT)
subdir = src
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/m4/libtool.m4 \
$(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
$(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(include_HEADERS) \
$(am__DIST_COMMON)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES = lognorm-features.h
CONFIG_CLEAN_VPATH_FILES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
am__install_max = 40
am__nobase_strip_setup = \
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
am__nobase_strip = \
for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
am__nobase_list = $(am__nobase_strip_setup); \
for p in $$list; do echo "$$p $$p"; done | \
sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
$(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
if (++n[$$2] == $(am__install_max)) \
{ print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
END { for (dir in files) print dir, files[dir] }'
am__base_list = \
sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
am__uninstall_files_from_dir = { \
test -z "$$files" \
|| { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
|| { echo " ( cd '$$dir' && rm -f" $$files ")"; \
$(am__cd) "$$dir" && rm -f $$files; }; \
}
am__installdirs = "$(DESTDIR)$(libdir)" "$(DESTDIR)$(bindir)" \
"$(DESTDIR)$(includedir)"
LTLIBRARIES = $(lib_LTLIBRARIES)
am__DEPENDENCIES_1 =
liblognorm_la_DEPENDENCIES = $(am__DEPENDENCIES_1) \
$(am__DEPENDENCIES_1) $(am__DEPENDENCIES_1)
am_liblognorm_la_OBJECTS = liblognorm_la-liblognorm.lo \
liblognorm_la-pdag.lo liblognorm_la-annot.lo \
liblognorm_la-samp.lo liblognorm_la-lognorm.lo \
liblognorm_la-parser.lo liblognorm_la-enc_syslog.lo \
liblognorm_la-enc_csv.lo liblognorm_la-enc_xml.lo \
liblognorm_la-v1_liblognorm.lo liblognorm_la-v1_parser.lo \
liblognorm_la-v1_ptree.lo liblognorm_la-v1_samp.lo
liblognorm_la_OBJECTS = $(am_liblognorm_la_OBJECTS)
AM_V_lt = $(am__v_lt_@AM_V@)
am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@)
am__v_lt_0 = --silent
am__v_lt_1 =
liblognorm_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(liblognorm_la_LDFLAGS) $(LDFLAGS) -o $@
PROGRAMS = $(bin_PROGRAMS)
am__objects_1 = ln_test-lognormalizer.$(OBJEXT)
am_ln_test_OBJECTS = $(am__objects_1)
ln_test_OBJECTS = $(am_ln_test_OBJECTS)
am__DEPENDENCIES_2 = $(am__DEPENDENCIES_1) $(am__DEPENDENCIES_1) \
$(am__DEPENDENCIES_1) ../compat/compat.la
ln_test_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(ln_test_LDFLAGS) $(LDFLAGS) -o $@
am_lognormalizer_OBJECTS = lognormalizer-lognormalizer.$(OBJEXT)
lognormalizer_OBJECTS = $(am_lognormalizer_OBJECTS)
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir)
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
am__mv = mv -f
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \
$(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \
$(AM_CFLAGS) $(CFLAGS)
AM_V_CC = $(am__v_CC_@AM_V@)
am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@)
am__v_CC_0 = @echo " CC " $@;
am__v_CC_1 =
CCLD = $(CC)
LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(AM_LDFLAGS) $(LDFLAGS) -o $@
AM_V_CCLD = $(am__v_CCLD_@AM_V@)
am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@)
am__v_CCLD_0 = @echo " CCLD " $@;
am__v_CCLD_1 =
SOURCES = $(liblognorm_la_SOURCES) $(ln_test_SOURCES) \
$(lognormalizer_SOURCES)
DIST_SOURCES = $(liblognorm_la_SOURCES) $(ln_test_SOURCES) \
$(lognormalizer_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
HEADERS = $(include_HEADERS)
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
# Read a list of newline-separated strings from the standard input,
# and print each of them once, without duplicates. Input order is
# *not* preserved.
am__uniquify_input = $(AWK) '\
BEGIN { nonempty = 0; } \
{ items[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in items) print i; }; } \
'
# Make sure the list of sources is unique. This is necessary because,
# e.g., the same source file might be shared among _SOURCES variables
# for different programs/libraries.
am__define_uniq_tagged_files = \
list='$(am__tagged_files)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | $(am__uniquify_input)`
ETAGS = etags
CTAGS = ctags
am__DIST_COMMON = $(srcdir)/Makefile.in \
$(srcdir)/lognorm-features.h.in $(top_srcdir)/depcomp
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AR = @AR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
DLLTOOL = @DLLTOOL@
DSYMUTIL = @DSYMUTIL@
DUMPBIN = @DUMPBIN@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
FEATURE_REGEXP = @FEATURE_REGEXP@
FGREP = @FGREP@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
JSON_C_CFLAGS = @JSON_C_CFLAGS@
JSON_C_LIBS = @JSON_C_LIBS@
LD = @LD@
LDFLAGS = @LDFLAGS@
LIBESTR_CFLAGS = @LIBESTR_CFLAGS@
LIBESTR_LIBS = @LIBESTR_LIBS@
LIBLOGNORM_CFLAGS = @LIBLOGNORM_CFLAGS@
LIBLOGNORM_LIBS = @LIBLOGNORM_LIBS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LIBTOOL = @LIBTOOL@
LIPO = @LIPO@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@
MAKEINFO = @MAKEINFO@
MANIFEST_TOOL = @MANIFEST_TOOL@
MKDIR_P = @MKDIR_P@
NM = @NM@
NMEDIT = @NMEDIT@
OBJDUMP = @OBJDUMP@
OBJEXT = @OBJEXT@
OTOOL = @OTOOL@
OTOOL64 = @OTOOL64@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PCRE_CFLAGS = @PCRE_CFLAGS@
PCRE_LIBS = @PCRE_LIBS@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
RANLIB = @RANLIB@
SED = @SED@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
SPHINXBUILD = @SPHINXBUILD@
STRIP = @STRIP@
VALGRIND = @VALGRIND@
VERSION = @VERSION@
WARN_CFLAGS = @WARN_CFLAGS@
WARN_LDFLAGS = @WARN_LDFLAGS@
WARN_SCANNERFLAGS = @WARN_SCANNERFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_AR = @ac_ct_AR@
ac_ct_CC = @ac_ct_CC@
ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host = @host@
host_alias = @host_alias@
host_cpu = @host_cpu@
host_os = @host_os@
host_vendor = @host_vendor@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
pkg_config_libs_private = @pkg_config_libs_private@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
# Uncomment for debugging
DEBUG = -g
PTHREADS_CFLAGS = -pthread
lognormalizer_SOURCES = lognormalizer.c
lognormalizer_CPPFLAGS = -I$(top_srcdir) $(WARN_CFLAGS) $(JSON_C_CFLAGS) $(LIBESTR_CFLAGS)
lognormalizer_LDADD = $(JSON_C_LIBS) $(LIBLOGNORM_LIBS) $(LIBESTR_LIBS) ../compat/compat.la
lognormalizer_DEPENDENCIES = liblognorm.la
ln_test_SOURCES = $(lognormalizer_SOURCES)
ln_test_CPPFLAGS = $(lognormalizer_CPPFLAGS)
ln_test_LDADD = $(lognormalizer_LDADD)
ln_test_DEPENDENCIES = $(lognormalizer_DEPENDENCIES)
ln_test_LDFLAGS = -no-install
lib_LTLIBRARIES = liblognorm.la
# Users violently requested that v2 shall be able to understand v1
# rulebases. As both are very very different, we now include the
# full v1 engine for this purpose. This here is what does this.
# see also: https://github.com/rsyslog/liblognorm/issues/103
liblognorm_la_SOURCES = liblognorm.c pdag.c annot.c samp.c lognorm.c \
parser.c enc_syslog.c enc_csv.c enc_xml.c v1_liblognorm.c \
v1_parser.c v1_ptree.c v1_samp.c
liblognorm_la_CPPFLAGS = $(JSON_C_CFLAGS) $(WARN_CFLAGS) $(LIBESTR_CFLAGS) $(PCRE_CFLAGS)
liblognorm_la_LIBADD = $(rt_libs) $(JSON_C_LIBS) $(LIBESTR_LIBS) $(PCRE_LIBS) -lestr
# info on version-info:
# http://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
# Note: v2 now starts at version 5, as v1 previously also had 4
liblognorm_la_LDFLAGS = -version-info 6:0:1
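# For reference: -version-info is "current:revision:age". With 6:0:1, libtool
# derives the shared-object suffix from current - age (i.e. .so.5.x.y on ELF
# systems), which keeps binary compatibility with interface version 5.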
# and now the old cruft:
EXTRA_DIST = internal.h liblognorm.h lognorm.h pdag.h annot.h samp.h \
enc.h parser.h helpers.h v1_liblognorm.h v1_parser.h v1_samp.h \
v1_ptree.h
include_HEADERS = liblognorm.h samp.h lognorm.h pdag.h annot.h enc.h parser.h lognorm-features.h
all: all-am
.SUFFIXES:
.SUFFIXES: .c .lo .o .obj
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
&& { if test -f $@; then exit 0; else break; fi; }; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu src/Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --gnu src/Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(am__aclocal_m4_deps):
lognorm-features.h: $(top_builddir)/config.status $(srcdir)/lognorm-features.h.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
install-libLTLIBRARIES: $(lib_LTLIBRARIES)
@$(NORMAL_INSTALL)
@list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \
list2=; for p in $$list; do \
if test -f $$p; then \
list2="$$list2 $$p"; \
else :; fi; \
done; \
test -z "$$list2" || { \
echo " $(MKDIR_P) '$(DESTDIR)$(libdir)'"; \
$(MKDIR_P) "$(DESTDIR)$(libdir)" || exit 1; \
echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(libdir)'"; \
$(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(libdir)"; \
}
uninstall-libLTLIBRARIES:
@$(NORMAL_UNINSTALL)
@list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \
for p in $$list; do \
$(am__strip_dir) \
echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(libdir)/$$f'"; \
$(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(libdir)/$$f"; \
done
clean-libLTLIBRARIES:
-test -z "$(lib_LTLIBRARIES)" || rm -f $(lib_LTLIBRARIES)
@list='$(lib_LTLIBRARIES)'; \
locs=`for p in $$list; do echo $$p; done | \
sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \
sort -u`; \
test -z "$$locs" || { \
echo rm -f $${locs}; \
rm -f $${locs}; \
}
liblognorm.la: $(liblognorm_la_OBJECTS) $(liblognorm_la_DEPENDENCIES) $(EXTRA_liblognorm_la_DEPENDENCIES)
$(AM_V_CCLD)$(liblognorm_la_LINK) -rpath $(libdir) $(liblognorm_la_OBJECTS) $(liblognorm_la_LIBADD) $(LIBS)
install-binPROGRAMS: $(bin_PROGRAMS)
@$(NORMAL_INSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
if test -n "$$list"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(bindir)'"; \
$(MKDIR_P) "$(DESTDIR)$(bindir)" || exit 1; \
fi; \
for p in $$list; do echo "$$p $$p"; done | \
sed 's/$(EXEEXT)$$//' | \
while read p p1; do if test -f $$p \
|| test -f $$p1 \
; then echo "$$p"; echo "$$p"; else :; fi; \
done | \
sed -e 'p;s,.*/,,;n;h' \
-e 's|.*|.|' \
-e 'p;x;s,.*/,,;s/$(EXEEXT)$$//;$(transform);s/$$/$(EXEEXT)/' | \
sed 'N;N;N;s,\n, ,g' | \
$(AWK) 'BEGIN { files["."] = ""; dirs["."] = 1 } \
{ d=$$3; if (dirs[d] != 1) { print "d", d; dirs[d] = 1 } \
if ($$2 == $$4) files[d] = files[d] " " $$1; \
else { print "f", $$3 "/" $$4, $$1; } } \
END { for (d in files) print "f", d, files[d] }' | \
while read type dir files; do \
if test "$$dir" = .; then dir=; else dir=/$$dir; fi; \
test -z "$$files" || { \
echo " $(INSTALL_PROGRAM_ENV) $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL_PROGRAM) $$files '$(DESTDIR)$(bindir)$$dir'"; \
$(INSTALL_PROGRAM_ENV) $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL_PROGRAM) $$files "$(DESTDIR)$(bindir)$$dir" || exit $$?; \
} \
; done
uninstall-binPROGRAMS:
@$(NORMAL_UNINSTALL)
@list='$(bin_PROGRAMS)'; test -n "$(bindir)" || list=; \
files=`for p in $$list; do echo "$$p"; done | \
sed -e 'h;s,^.*/,,;s/$(EXEEXT)$$//;$(transform)' \
-e 's/$$/$(EXEEXT)/' \
`; \
test -n "$$list" || exit 0; \
echo " ( cd '$(DESTDIR)$(bindir)' && rm -f" $$files ")"; \
cd "$(DESTDIR)$(bindir)" && rm -f $$files
clean-binPROGRAMS:
@list='$(bin_PROGRAMS)'; test -n "$$list" || exit 0; \
echo " rm -f" $$list; \
rm -f $$list || exit $$?; \
test -n "$(EXEEXT)" || exit 0; \
list=`for p in $$list; do echo "$$p"; done | sed 's/$(EXEEXT)$$//'`; \
echo " rm -f" $$list; \
rm -f $$list
clean-checkPROGRAMS:
@list='$(check_PROGRAMS)'; test -n "$$list" || exit 0; \
echo " rm -f" $$list; \
rm -f $$list || exit $$?; \
test -n "$(EXEEXT)" || exit 0; \
list=`for p in $$list; do echo "$$p"; done | sed 's/$(EXEEXT)$$//'`; \
echo " rm -f" $$list; \
rm -f $$list
ln_test$(EXEEXT): $(ln_test_OBJECTS) $(ln_test_DEPENDENCIES) $(EXTRA_ln_test_DEPENDENCIES)
@rm -f ln_test$(EXEEXT)
$(AM_V_CCLD)$(ln_test_LINK) $(ln_test_OBJECTS) $(ln_test_LDADD) $(LIBS)
lognormalizer$(EXEEXT): $(lognormalizer_OBJECTS) $(lognormalizer_DEPENDENCIES) $(EXTRA_lognormalizer_DEPENDENCIES)
@rm -f lognormalizer$(EXEEXT)
$(AM_V_CCLD)$(LINK) $(lognormalizer_OBJECTS) $(lognormalizer_LDADD) $(LIBS)
mostlyclean-compile:
-rm -f *.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-annot.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-enc_csv.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-enc_syslog.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-enc_xml.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-liblognorm.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-lognorm.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-parser.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-pdag.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-samp.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-v1_liblognorm.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-v1_parser.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-v1_ptree.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/liblognorm_la-v1_samp.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/ln_test-lognormalizer.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/lognormalizer-lognormalizer.Po@am__quote@
.c.o:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $<
.c.obj:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'`
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
.c.lo:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $<
liblognorm_la-liblognorm.lo: liblognorm.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-liblognorm.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-liblognorm.Tpo -c -o liblognorm_la-liblognorm.lo `test -f 'liblognorm.c' || echo '$(srcdir)/'`liblognorm.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-liblognorm.Tpo $(DEPDIR)/liblognorm_la-liblognorm.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='liblognorm.c' object='liblognorm_la-liblognorm.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-liblognorm.lo `test -f 'liblognorm.c' || echo '$(srcdir)/'`liblognorm.c
liblognorm_la-pdag.lo: pdag.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-pdag.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-pdag.Tpo -c -o liblognorm_la-pdag.lo `test -f 'pdag.c' || echo '$(srcdir)/'`pdag.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-pdag.Tpo $(DEPDIR)/liblognorm_la-pdag.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='pdag.c' object='liblognorm_la-pdag.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-pdag.lo `test -f 'pdag.c' || echo '$(srcdir)/'`pdag.c
liblognorm_la-annot.lo: annot.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-annot.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-annot.Tpo -c -o liblognorm_la-annot.lo `test -f 'annot.c' || echo '$(srcdir)/'`annot.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-annot.Tpo $(DEPDIR)/liblognorm_la-annot.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='annot.c' object='liblognorm_la-annot.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-annot.lo `test -f 'annot.c' || echo '$(srcdir)/'`annot.c
liblognorm_la-samp.lo: samp.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-samp.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-samp.Tpo -c -o liblognorm_la-samp.lo `test -f 'samp.c' || echo '$(srcdir)/'`samp.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-samp.Tpo $(DEPDIR)/liblognorm_la-samp.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='samp.c' object='liblognorm_la-samp.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-samp.lo `test -f 'samp.c' || echo '$(srcdir)/'`samp.c
liblognorm_la-lognorm.lo: lognorm.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-lognorm.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-lognorm.Tpo -c -o liblognorm_la-lognorm.lo `test -f 'lognorm.c' || echo '$(srcdir)/'`lognorm.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-lognorm.Tpo $(DEPDIR)/liblognorm_la-lognorm.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='lognorm.c' object='liblognorm_la-lognorm.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-lognorm.lo `test -f 'lognorm.c' || echo '$(srcdir)/'`lognorm.c
liblognorm_la-parser.lo: parser.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-parser.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-parser.Tpo -c -o liblognorm_la-parser.lo `test -f 'parser.c' || echo '$(srcdir)/'`parser.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-parser.Tpo $(DEPDIR)/liblognorm_la-parser.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='parser.c' object='liblognorm_la-parser.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-parser.lo `test -f 'parser.c' || echo '$(srcdir)/'`parser.c
liblognorm_la-enc_syslog.lo: enc_syslog.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-enc_syslog.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-enc_syslog.Tpo -c -o liblognorm_la-enc_syslog.lo `test -f 'enc_syslog.c' || echo '$(srcdir)/'`enc_syslog.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-enc_syslog.Tpo $(DEPDIR)/liblognorm_la-enc_syslog.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='enc_syslog.c' object='liblognorm_la-enc_syslog.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-enc_syslog.lo `test -f 'enc_syslog.c' || echo '$(srcdir)/'`enc_syslog.c
liblognorm_la-enc_csv.lo: enc_csv.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-enc_csv.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-enc_csv.Tpo -c -o liblognorm_la-enc_csv.lo `test -f 'enc_csv.c' || echo '$(srcdir)/'`enc_csv.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-enc_csv.Tpo $(DEPDIR)/liblognorm_la-enc_csv.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='enc_csv.c' object='liblognorm_la-enc_csv.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-enc_csv.lo `test -f 'enc_csv.c' || echo '$(srcdir)/'`enc_csv.c
liblognorm_la-enc_xml.lo: enc_xml.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-enc_xml.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-enc_xml.Tpo -c -o liblognorm_la-enc_xml.lo `test -f 'enc_xml.c' || echo '$(srcdir)/'`enc_xml.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-enc_xml.Tpo $(DEPDIR)/liblognorm_la-enc_xml.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='enc_xml.c' object='liblognorm_la-enc_xml.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-enc_xml.lo `test -f 'enc_xml.c' || echo '$(srcdir)/'`enc_xml.c
liblognorm_la-v1_liblognorm.lo: v1_liblognorm.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-v1_liblognorm.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-v1_liblognorm.Tpo -c -o liblognorm_la-v1_liblognorm.lo `test -f 'v1_liblognorm.c' || echo '$(srcdir)/'`v1_liblognorm.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-v1_liblognorm.Tpo $(DEPDIR)/liblognorm_la-v1_liblognorm.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='v1_liblognorm.c' object='liblognorm_la-v1_liblognorm.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-v1_liblognorm.lo `test -f 'v1_liblognorm.c' || echo '$(srcdir)/'`v1_liblognorm.c
liblognorm_la-v1_parser.lo: v1_parser.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-v1_parser.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-v1_parser.Tpo -c -o liblognorm_la-v1_parser.lo `test -f 'v1_parser.c' || echo '$(srcdir)/'`v1_parser.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-v1_parser.Tpo $(DEPDIR)/liblognorm_la-v1_parser.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='v1_parser.c' object='liblognorm_la-v1_parser.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-v1_parser.lo `test -f 'v1_parser.c' || echo '$(srcdir)/'`v1_parser.c
liblognorm_la-v1_ptree.lo: v1_ptree.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-v1_ptree.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-v1_ptree.Tpo -c -o liblognorm_la-v1_ptree.lo `test -f 'v1_ptree.c' || echo '$(srcdir)/'`v1_ptree.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-v1_ptree.Tpo $(DEPDIR)/liblognorm_la-v1_ptree.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='v1_ptree.c' object='liblognorm_la-v1_ptree.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-v1_ptree.lo `test -f 'v1_ptree.c' || echo '$(srcdir)/'`v1_ptree.c
liblognorm_la-v1_samp.lo: v1_samp.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT liblognorm_la-v1_samp.lo -MD -MP -MF $(DEPDIR)/liblognorm_la-v1_samp.Tpo -c -o liblognorm_la-v1_samp.lo `test -f 'v1_samp.c' || echo '$(srcdir)/'`v1_samp.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/liblognorm_la-v1_samp.Tpo $(DEPDIR)/liblognorm_la-v1_samp.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='v1_samp.c' object='liblognorm_la-v1_samp.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(liblognorm_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o liblognorm_la-v1_samp.lo `test -f 'v1_samp.c' || echo '$(srcdir)/'`v1_samp.c
ln_test-lognormalizer.o: lognormalizer.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ln_test_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT ln_test-lognormalizer.o -MD -MP -MF $(DEPDIR)/ln_test-lognormalizer.Tpo -c -o ln_test-lognormalizer.o `test -f 'lognormalizer.c' || echo '$(srcdir)/'`lognormalizer.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/ln_test-lognormalizer.Tpo $(DEPDIR)/ln_test-lognormalizer.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='lognormalizer.c' object='ln_test-lognormalizer.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ln_test_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o ln_test-lognormalizer.o `test -f 'lognormalizer.c' || echo '$(srcdir)/'`lognormalizer.c
ln_test-lognormalizer.obj: lognormalizer.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ln_test_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT ln_test-lognormalizer.obj -MD -MP -MF $(DEPDIR)/ln_test-lognormalizer.Tpo -c -o ln_test-lognormalizer.obj `if test -f 'lognormalizer.c'; then $(CYGPATH_W) 'lognormalizer.c'; else $(CYGPATH_W) '$(srcdir)/lognormalizer.c'; fi`
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/ln_test-lognormalizer.Tpo $(DEPDIR)/ln_test-lognormalizer.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='lognormalizer.c' object='ln_test-lognormalizer.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(ln_test_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o ln_test-lognormalizer.obj `if test -f 'lognormalizer.c'; then $(CYGPATH_W) 'lognormalizer.c'; else $(CYGPATH_W) '$(srcdir)/lognormalizer.c'; fi`
lognormalizer-lognormalizer.o: lognormalizer.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(lognormalizer_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT lognormalizer-lognormalizer.o -MD -MP -MF $(DEPDIR)/lognormalizer-lognormalizer.Tpo -c -o lognormalizer-lognormalizer.o `test -f 'lognormalizer.c' || echo '$(srcdir)/'`lognormalizer.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/lognormalizer-lognormalizer.Tpo $(DEPDIR)/lognormalizer-lognormalizer.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='lognormalizer.c' object='lognormalizer-lognormalizer.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(lognormalizer_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o lognormalizer-lognormalizer.o `test -f 'lognormalizer.c' || echo '$(srcdir)/'`lognormalizer.c
lognormalizer-lognormalizer.obj: lognormalizer.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(lognormalizer_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT lognormalizer-lognormalizer.obj -MD -MP -MF $(DEPDIR)/lognormalizer-lognormalizer.Tpo -c -o lognormalizer-lognormalizer.obj `if test -f 'lognormalizer.c'; then $(CYGPATH_W) 'lognormalizer.c'; else $(CYGPATH_W) '$(srcdir)/lognormalizer.c'; fi`
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/lognormalizer-lognormalizer.Tpo $(DEPDIR)/lognormalizer-lognormalizer.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='lognormalizer.c' object='lognormalizer-lognormalizer.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(lognormalizer_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o lognormalizer-lognormalizer.obj `if test -f 'lognormalizer.c'; then $(CYGPATH_W) 'lognormalizer.c'; else $(CYGPATH_W) '$(srcdir)/lognormalizer.c'; fi`
mostlyclean-libtool:
-rm -f *.lo
clean-libtool:
-rm -rf .libs _libs
install-includeHEADERS: $(include_HEADERS)
@$(NORMAL_INSTALL)
@list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \
if test -n "$$list"; then \
echo " $(MKDIR_P) '$(DESTDIR)$(includedir)'"; \
$(MKDIR_P) "$(DESTDIR)$(includedir)" || exit 1; \
fi; \
for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
echo "$$d$$p"; \
done | $(am__base_list) | \
while read files; do \
echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(includedir)'"; \
$(INSTALL_HEADER) $$files "$(DESTDIR)$(includedir)" || exit $$?; \
done
uninstall-includeHEADERS:
@$(NORMAL_UNINSTALL)
@list='$(include_HEADERS)'; test -n "$(includedir)" || list=; \
files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \
dir='$(DESTDIR)$(includedir)'; $(am__uninstall_files_from_dir)
ID: $(am__tagged_files)
$(am__define_uniq_tagged_files); mkid -fID $$unique
tags: tags-am
TAGS: tags
tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
set x; \
here=`pwd`; \
$(am__define_uniq_tagged_files); \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: ctags-am
CTAGS: ctags
ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
$(am__define_uniq_tagged_files); \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"
cscopelist: cscopelist-am
cscopelist-am: $(am__tagged_files)
list='$(am__tagged_files)'; \
case "$(srcdir)" in \
[\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
*) sdir=$(subdir)/$(srcdir) ;; \
esac; \
for i in $$list; do \
if test -f "$$i"; then \
echo "$(subdir)/$$i"; \
else \
echo "$$sdir/$$i"; \
fi; \
done >> $(top_builddir)/cscope.files
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
check-am: all-am
$(MAKE) $(AM_MAKEFLAGS) $(check_PROGRAMS)
check: check-am
all-am: Makefile $(LTLIBRARIES) $(PROGRAMS) $(HEADERS)
install-binPROGRAMS: install-libLTLIBRARIES
installdirs:
for dir in "$(DESTDIR)$(libdir)" "$(DESTDIR)$(bindir)" "$(DESTDIR)$(includedir)"; do \
test -z "$$dir" || $(MKDIR_P) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-binPROGRAMS clean-checkPROGRAMS clean-generic \
clean-libLTLIBRARIES clean-libtool mostlyclean-am
distclean: distclean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am: install-includeHEADERS
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am: install-binPROGRAMS install-libLTLIBRARIES
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man:
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic \
mostlyclean-libtool
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-binPROGRAMS uninstall-includeHEADERS \
uninstall-libLTLIBRARIES
.MAKE: check-am install-am install-strip
.PHONY: CTAGS GTAGS TAGS all all-am check check-am clean \
clean-binPROGRAMS clean-checkPROGRAMS clean-generic \
clean-libLTLIBRARIES clean-libtool cscopelist-am ctags \
ctags-am distclean distclean-compile distclean-generic \
distclean-libtool distclean-tags distdir dvi dvi-am html \
html-am info info-am install install-am install-binPROGRAMS \
install-data install-data-am install-dvi install-dvi-am \
install-exec install-exec-am install-html install-html-am \
install-includeHEADERS install-info install-info-am \
install-libLTLIBRARIES install-man install-pdf install-pdf-am \
install-ps install-ps-am install-strip installcheck \
installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-compile \
mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
tags tags-am uninstall uninstall-am uninstall-binPROGRAMS \
uninstall-includeHEADERS uninstall-libLTLIBRARIES
.PRECIOUS: Makefile
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
liblognorm-2.0.6/src/v1_ptree.c 0000644 0001750 0001750 00000062221 13273030617 013271 0000000 0000000 /**
 * @file v1_ptree.c
 * @brief Implementation of the parse tree object.
 * @class ln_ptree v1_ptree.h
*//*
* Copyright 2010 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <assert.h>
#include <libestr.h>
#include <json.h>
#define LOGNORM_V1_SUBSYSTEM /* indicate we are old cruft */
#include "v1_liblognorm.h"
#include "annot.h"
#include "internal.h"
#include "lognorm.h"
#include "v1_ptree.h"
#include "v1_samp.h"
#include "v1_parser.h"
/**
 * Get the base address of the common prefix. Takes the length of the prefix
 * into account and selects the right buffer (inline array vs. heap buffer).
*/
static inline unsigned char*
prefixBase(struct ln_ptree *tree)
{
return (tree->lenPrefix <= sizeof(tree->prefix))
? tree->prefix.data : tree->prefix.ptr;
}
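/* Note on prefix storage (derived from setPrefix/ln_deletePTree below and
 * above): prefixes that fit into sizeof(tree->prefix) are stored inline in
 * prefix.data; longer ones live in a heap buffer referenced by prefix.ptr.
 * The stored length therefore decides which union member is valid.
 */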
struct ln_ptree*
ln_newPTree(ln_ctx ctx, struct ln_ptree **parentptr)
{
struct ln_ptree *tree;
if((tree = calloc(1, sizeof(struct ln_ptree))) == NULL)
goto done;
tree->parentptr = parentptr;
tree->ctx = ctx;
ctx->nNodes++;
done: return tree;
}
void
ln_deletePTreeNode(ln_fieldList_t *node)
{
ln_deletePTree(node->subtree);
es_deleteStr(node->name);
if(node->data != NULL)
es_deleteStr(node->data);
if(node->raw_data != NULL)
es_deleteStr(node->raw_data);
if(node->parser_data != NULL && node->parser_data_destructor != NULL)
node->parser_data_destructor(&(node->parser_data));
free(node);
}
void
ln_deletePTree(struct ln_ptree *tree)
{
ln_fieldList_t *node, *nextnode;
size_t i;
if(tree == NULL)
goto done;
if(tree->tags != NULL)
json_object_put(tree->tags);
for(node = tree->froot; node != NULL; node = nextnode) {
nextnode = node->next;
ln_deletePTreeNode(node);
}
/* need to free a large prefix buffer? */
if(tree->lenPrefix > sizeof(tree->prefix))
free(tree->prefix.ptr);
for(i = 0 ; i < 256 ; ++i)
if(tree->subtree[i] != NULL)
ln_deletePTree(tree->subtree[i]);
free(tree);
done: return;
}
/**
 * Set the common prefix inside a node, taking into account the subtle
* issues associated with it.
* @return 0 on success, something else otherwise
*/
static int
setPrefix(struct ln_ptree *tree, unsigned char *buf, size_t lenBuf, size_t offs)
{
int r;
LN_DBGPRINTF(tree->ctx, "setPrefix lenBuf %zu, offs %zu", lenBuf, offs);
tree->lenPrefix = lenBuf - offs;
if(tree->lenPrefix > sizeof(tree->prefix)) {
/* too-large for standard buffer, need to alloc one */
if((tree->prefix.ptr = malloc(tree->lenPrefix * sizeof(unsigned char))) == NULL) {
r = LN_NOMEM;
goto done; /* fail! */
}
memcpy(tree->prefix.ptr, buf, tree->lenPrefix);
} else {
/* note: r->lenPrefix may be 0, but that is OK */
memcpy(tree->prefix.data, buf, tree->lenPrefix);
}
r = 0;
done: return r;
}
/**
* Check if the provided tree is a leaf. This means that it
* does not contain any subtrees.
* @return 1 if it is a leaf, 0 otherwise
*/
static int
isLeaf(struct ln_ptree *tree)
{
int r = 0;
int i;
if(tree->froot != NULL)
goto done;
for(i = 0 ; i < 256 ; ++i) {
if(tree->subtree[i] != NULL)
goto done;
}
r = 1;
done: return r;
}
/**
 * Check if the provided tree is a true leaf. This means that it
 * does not contain any subtrees of any kind, has no prefix,
 * and is not a terminal leaf.
 * @return 1 if it is a true leaf, 0 otherwise
*/
static inline int
isTrueLeaf(struct ln_ptree *tree)
{
return (tree->lenPrefix == 0) && isLeaf(tree) && !tree->flags.isTerminal;
}
struct ln_ptree *
ln_addPTree(struct ln_ptree *tree, es_str_t *str, size_t offs)
{
struct ln_ptree *r;
struct ln_ptree **parentptr; /**< pointer in parent that needs to be updated */
LN_DBGPRINTF(tree->ctx, "addPTree: offs %zu", offs);
parentptr = &(tree->subtree[es_getBufAddr(str)[offs]]);
/* First check if the tree node is totally empty. If so, we can simply add
* the prefix to this node. This case is important, because it happens
* every time with a new field.
*/
if(isTrueLeaf(tree)) {
if(setPrefix(tree, es_getBufAddr(str), es_strlen(str), offs) != 0) {
r = NULL;
} else {
r = tree;
}
goto done;
}
if(tree->ctx->debug) {
char *cstr = es_str2cstr(str, NULL);
LN_DBGPRINTF(tree->ctx, "addPTree: add '%s', offs %zu, tree %p",
cstr + offs, offs, tree);
free(cstr);
}
if((r = ln_newPTree(tree->ctx, parentptr)) == NULL)
goto done;
if(setPrefix(r, es_getBufAddr(str) + offs + 1, es_strlen(str) - offs - 1, 0) != 0) {
free(r);
r = NULL;
goto done;
}
*parentptr = r;
done: return r;
}
/**
* Split the provided tree (node) into two at the provided index into its
* common prefix. This function exists to support splitting nodes when
* a mismatch in the common prefix requires that. This function more or less
* keeps the tree as it is, just changes the structure. No new node is added.
 * Usually, it is desired to add a new node; this must be done afterwards.
* Note that we need to create a new tree *in front of* the current one, as
* the current one contains field etc. subtree pointers.
* @param[in] tree tree to split
* @param[in] offs offset into common prefix (must be less than prefix length!)
*/
static struct ln_ptree*
splitTree(struct ln_ptree *tree, unsigned short offs)
{
unsigned char *c;
struct ln_ptree *r;
unsigned short newlen;
ln_ptree **newparentptr; /**< pointer in parent that needs to be updated */
assert(offs < tree->lenPrefix);
if((r = ln_newPTree(tree->ctx, tree->parentptr)) == NULL)
goto done;
LN_DBGPRINTF(tree->ctx, "splitTree %p at offs %u", tree, offs);
/* note: the overall prefix is reduced by one char, which is now taken
* care of inside the "branch table".
*/
c = prefixBase(tree);
//LN_DBGPRINTF(tree->ctx, "splitTree new bb, *(c+offs): '%s'", c);
if(setPrefix(r, c, offs, 0) != 0) {
ln_deletePTree(r);
r = NULL;
goto done; /* fail! */
}
LN_DBGPRINTF(tree->ctx, "splitTree new tree %p lenPrefix=%u, char '%c'", r, r->lenPrefix, r->prefix.data[0]);
/* add the proper branch table entry for the new node. must be done
* here, because the next step will destroy the required index char!
*/
newparentptr = &(r->subtree[c[offs]]);
r->subtree[c[offs]] = tree;
/* finally fix existing common prefix */
newlen = tree->lenPrefix - offs - 1;
if(tree->lenPrefix > sizeof(tree->prefix) && (newlen <= sizeof(tree->prefix))) {
/* note: c is a different pointer; the original
* pointer is overwritten by memcpy! */
LN_DBGPRINTF(tree->ctx, "splitTree new case one bb, offs %u, lenPrefix %u, newlen %u", offs, tree->lenPrefix, newlen);
//LN_DBGPRINTF(tree->ctx, "splitTree new case one bb, *(c+offs): '%s'", c);
memcpy(tree->prefix.data, c+offs+1, newlen);
free(c);
} else {
LN_DBGPRINTF(tree->ctx, "splitTree new case two bb, offs=%u, newlen %u", offs, newlen);
memmove(c, c+offs+1, newlen);
}
tree->lenPrefix = tree->lenPrefix - offs - 1;
if(tree->parentptr == 0)
tree->ctx->ptree = r; /* root does not have a parent! */
else
*(tree->parentptr) = r;
tree->parentptr = newparentptr;
done: return r;
}
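/* Illustration of splitTree(): splitting a node whose common prefix is
 * "foobar" at offs=3 yields a new parent node with prefix "foo"; the old
 * node keeps prefix "ar", and the character 'b' is consumed by the new
 * parent's branch-table entry, which now points to the old node.
 */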
struct ln_ptree *
ln_buildPTree(struct ln_ptree *tree, es_str_t *str, size_t offs)
{
struct ln_ptree *r;
unsigned char *c;
unsigned char *cpfix;
size_t i;
unsigned short ipfix;
assert(tree != NULL);
LN_DBGPRINTF(tree->ctx, "buildPTree: begin at %p, offs %zu", tree, offs);
c = es_getBufAddr(str);
/* check if the prefix matches and, if not, at what offset it is different */
ipfix = 0;
cpfix = prefixBase(tree);
for( i = offs
; (i < es_strlen(str)) && (ipfix < tree->lenPrefix) && (c[i] == cpfix[ipfix])
; ++i, ++ipfix) {
; /*DO NOTHING - just find end of match */
LN_DBGPRINTF(tree->ctx, "buildPTree: tree %p, i %zu, char '%c'", tree, i, c[i]);
}
/* if we reach this point, we have processed as much of the common prefix
* as we could. The following code now does the proper actions based on
* the possible cases.
*/
if(i == es_strlen(str)) {
/* all of our input is consumed, no more recursion */
if(ipfix == tree->lenPrefix) {
LN_DBGPRINTF(tree->ctx, "case 1.1");
/* exact match, we are done! */
r = tree;
} else {
LN_DBGPRINTF(tree->ctx, "case 1.2");
/* we need to split the node at the current position */
r = splitTree(tree, ipfix);
}
} else if(ipfix < tree->lenPrefix) {
LN_DBGPRINTF(tree->ctx, "case 2, i=%zu, ipfix=%u", i, ipfix);
/* we need to split the node at the current position */
if((r = splitTree(tree, ipfix)) == NULL)
goto done; /* fail */
LN_DBGPRINTF(tree->ctx, "pre addPTree: i %zu", i);
if((r = ln_addPTree(r, str, i)) == NULL)
goto done;
//r = ln_buildPTree(r, str, i + 1);
} else {
/* we could consume the current common prefix, but now need
* to traverse the rest of the tree based on the next char.
*/
if(tree->subtree[c[i]] == NULL) {
LN_DBGPRINTF(tree->ctx, "case 3.1");
/* non-match, need new subtree */
r = ln_addPTree(tree, str, i);
} else {
LN_DBGPRINTF(tree->ctx, "case 3.2");
/* match, follow subtree */
r = ln_buildPTree(tree->subtree[c[i]], str, i + 1);
}
}
//LN_DBGPRINTF(tree->ctx, "---------------------------------------");
//ln_displayPTree(tree, 0);
//LN_DBGPRINTF(tree->ctx, "=======================================");
done: return r;
}
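/* Summary of the cases handled in ln_buildPTree() above: 1.1 exact match
 * (input and prefix both exhausted); 1.2 input exhausted inside the prefix
 * (split only); 2 mismatch inside the prefix (split, then add the remainder);
 * 3.1 prefix consumed but no matching subtree (add a new subtree);
 * 3.2 prefix consumed and a matching subtree exists (recurse into it).
 */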
int
ln_addFDescrToPTree(struct ln_ptree **tree, ln_fieldList_t *node)
{
int r;
ln_fieldList_t *curr;
assert(tree != NULL);assert(*tree != NULL);
assert(node != NULL);
if((node->subtree = ln_newPTree((*tree)->ctx, &node->subtree)) == NULL) {
r = -1;
goto done;
}
LN_DBGPRINTF((*tree)->ctx, "got new subtree %p", node->subtree);
/* check if we already have this field, if so, merge
 * TODO: optimize, check logic
*/
for(curr = (*tree)->froot ; curr != NULL ; curr = curr->next) {
if(!es_strcmp(curr->name, node->name)
&& curr->parser == node->parser
&& ((curr->raw_data == NULL && node->raw_data == NULL)
|| (curr->raw_data != NULL && node->raw_data != NULL
&& !es_strcmp(curr->raw_data, node->raw_data)))) {
*tree = curr->subtree;
ln_deletePTreeNode(node);
r = 0;
LN_DBGPRINTF((*tree)->ctx, "merging with tree %p\n", *tree);
goto done;
}
}
if((*tree)->froot == NULL) {
(*tree)->froot = (*tree)->ftail = node;
} else {
(*tree)->ftail->next = node;
(*tree)->ftail = node;
}
r = 0;
LN_DBGPRINTF((*tree)->ctx, "prev subtree %p", *tree);
*tree = node->subtree;
LN_DBGPRINTF((*tree)->ctx, "new subtree %p", *tree);
done: return r;
}
void
ln_displayPTree(struct ln_ptree *tree, int level)
{
int i;
int nChildLit;
int nChildField;
es_str_t *str;
char *cstr;
ln_fieldList_t *node;
char indent[2048];
if(level > 1023)
level = 1023;
memset(indent, ' ', level * 2);
indent[level * 2] = '\0';
nChildField = 0;
for(node = tree->froot ; node != NULL ; node = node->next ) {
++nChildField;
}
nChildLit = 0;
for(i = 0 ; i < 256 ; ++i) {
if(tree->subtree[i] != NULL) {
nChildLit++;
}
}
str = es_newStr(sizeof(tree->prefix));
es_addBuf(&str, (char*) prefixBase(tree), tree->lenPrefix);
cstr = es_str2cstr(str, NULL);
es_deleteStr(str);
LN_DBGPRINTF(tree->ctx, "%ssubtree%s %p (prefix: '%s', children: %d literals, %d fields) [visited %u "
"backtracked %u terminated %u]",
indent, tree->flags.isTerminal ? " TERM" : "", tree, cstr, nChildLit, nChildField,
tree->stats.visited, tree->stats.backtracked, tree->stats.terminated);
free(cstr);
/* display char subtrees */
for(i = 0 ; i < 256 ; ++i) {
if(tree->subtree[i] != NULL) {
LN_DBGPRINTF(tree->ctx, "%schar %2.2x(%c):", indent, i, i);
ln_displayPTree(tree->subtree[i], level + 1);
}
}
/* display field subtrees */
for(node = tree->froot ; node != NULL ; node = node->next ) {
cstr = es_str2cstr(node->name, NULL);
LN_DBGPRINTF(tree->ctx, "%sfield %s:", indent, cstr);
free(cstr);
ln_displayPTree(node->subtree, level + 1);
}
}
/* the following is a quick hack, which should be moved to the
* string class.
*/
static inline void dotAddPtr(es_str_t **str, void *p)
{
char buf[64];
int i;
i = snprintf(buf, sizeof(buf), "%p", p);
es_addBuf(str, buf, i);
}
/**
* recursive handler for DOT graph generator.
*/
static void
ln_genDotPTreeGraphRec(struct ln_ptree *tree, es_str_t **str)
{
int i;
ln_fieldList_t *node;
dotAddPtr(str, tree);
es_addBufConstcstr(str, " [label=\"");
if(tree->lenPrefix > 0) {
es_addChar(str, '\'');
es_addBuf(str, (char*) prefixBase(tree), tree->lenPrefix);
es_addChar(str, '\'');
}
es_addBufConstcstr(str, "\"");
if(isLeaf(tree)) {
es_addBufConstcstr(str, " style=\"bold\"");
}
es_addBufConstcstr(str, "]\n");
/* display char subtrees */
for(i = 0 ; i < 256 ; ++i) {
if(tree->subtree[i] != NULL) {
dotAddPtr(str, tree);
es_addBufConstcstr(str, " -> ");
dotAddPtr(str, tree->subtree[i]);
es_addBufConstcstr(str, " [label=\"");
es_addChar(str, (char) i);
es_addBufConstcstr(str, "\"]\n");
ln_genDotPTreeGraphRec(tree->subtree[i], str);
}
}
/* display field subtrees */
for(node = tree->froot ; node != NULL ; node = node->next ) {
dotAddPtr(str, tree);
es_addBufConstcstr(str, " -> ");
dotAddPtr(str, node->subtree);
es_addBufConstcstr(str, " [label=\"");
es_addStr(str, node->name);
es_addBufConstcstr(str, "\" style=\"dotted\"]\n");
ln_genDotPTreeGraphRec(node->subtree, str);
}
}
void
ln_genDotPTreeGraph(struct ln_ptree *tree, es_str_t **str)
{
es_addBufConstcstr(str, "digraph ptree {\n");
ln_genDotPTreeGraphRec(tree, str);
es_addBufConstcstr(str, "}\n");
}
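/* Illustrative sketch (assumption, not shipped code): dump the parse tree as a
 * Graphviz DOT file. The helper name and the file name "ptree.dot" are made up
 * for the example; es_newStr()/es_str2cstr()/es_deleteStr() are the libestr
 * calls already used above, and ctx->ptree is the internal tree root.
 */
#if 0
static void
exampleDumpPTreeDot(ln_ctx ctx)
{
	es_str_t *str = es_newStr(1024);
	if(str == NULL)
		return;
	ln_genDotPTreeGraph(ctx->ptree, &str);
	char *cstr = es_str2cstr(str, NULL);
	FILE *fp = fopen("ptree.dot", "w");
	if(fp != NULL) {
		fputs(cstr, fp); /* render with: dot -Tpng ptree.dot -o ptree.png */
		fclose(fp);
	}
	free(cstr);
	es_deleteStr(str);
}
#endif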
/**
* add unparsed string to event.
*/
static int
addUnparsedField(const char *str, size_t strLen, int offs, struct json_object *json)
{
int r = 1;
struct json_object *value;
char *s = NULL;
CHKN(s = strndup(str, strLen));
value = json_object_new_string(s);
if (value == NULL) {
goto done;
}
json_object_object_add(json, ORIGINAL_MSG_KEY, value);
value = json_object_new_string(s + offs);
if (value == NULL) {
goto done;
}
json_object_object_add(json, UNPARSED_DATA_KEY, value);
r = 0;
done:
free(s);
return r;
}
/**
* Special parser for iptables-like name/value pairs.
* It pulls multiple fields. Note that once this parser has been selected,
* it is very unlikely to be left, as it is *very* generic. This parser is
* required because practice shows that already-structured data like iptables
* can otherwise not be processed by liblognorm in a meaningful way.
*
* @param[in] tree current tree to process
* @param[in] str string to be matched against (the to-be-normalized data)
* @param[in] strLen length of str
* @param[in/out] offs start position in input data, on exit first unparsed position
* @param[in/out] event handle to event that is being created during normalization
*
* @return 0 if the parser was successful, something else on error
*/
static int
ln_iptablesParser(struct ln_ptree *tree, const char *str, size_t strLen, size_t *offs,
struct json_object *json)
{
int r;
size_t o = *offs;
es_str_t *fname;
es_str_t *fval;
const char *pstr;
const char *end;
struct json_object *value;
LN_DBGPRINTF(tree->ctx, "%zu enter iptables parser, len %zu", *offs, strLen);
if(o == strLen) {
r = -1; /* can not be, we have no n/v pairs! */
goto done;
}
end = str + strLen;
pstr = str + o;
while(pstr < end) {
while(pstr < end && isspace(*pstr))
++pstr;
CHKN(fname = es_newStr(16));
while(pstr < end && !isspace(*pstr) && *pstr != '=') {
es_addChar(&fname, *pstr);
++pstr;
}
if(pstr < end && *pstr == '=') {
CHKN(fval = es_newStr(16));
++pstr;
/* error on space */
while(pstr < end && !isspace(*pstr)) {
es_addChar(&fval, *pstr);
++pstr;
}
} else {
CHKN(fval = es_newStrFromCStr("[*PRESENT*]",
sizeof("[*PRESENT*]")-1));
}
char *cn, *cv;
CHKN(cn = ln_es_str2cstr(&fname));
CHKN(cv = ln_es_str2cstr(&fval));
if (tree->ctx->debug) {
LN_DBGPRINTF(tree->ctx, "iptables parser extracts %s=%s", cn, cv);
}
CHKN(value = json_object_new_string(cv));
json_object_object_add(json, cn, value);
es_deleteStr(fval);
es_deleteStr(fname);
}
r = 0;
*offs = strLen;
done:
LN_DBGPRINTF(tree->ctx, "%zu iptables parser returns %d", *offs, r);
return r;
}
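/* Worked example (illustrative input, not from the test suite): for the fragment
 *   "IN=eth0 OUT= MAC=00:11:22:33:44:55 SRC=10.0.0.1 DF"
 * the loop above adds one JSON member per token: "IN"->"eth0", "OUT"->"",
 * "MAC"->"00:11:22:33:44:55", "SRC"->"10.0.0.1", and the value-less token "DF"
 * is stored with the placeholder "[*PRESENT*]". The parser always consumes the
 * remainder of the message, so *offs ends up at strLen.
 */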
/**
* Recursive step of the normalizer. It walks the parse tree and calls itself
* recursively when this is appropriate. It also implements backtracking in
* those (hopefully rare) cases where it is required.
*
* @param[in] tree current tree to process
* @param[in] str string to be matched against (the to-be-normalized data)
* @param[in] strLen length of the to-be-matched string
* @param[in] offs start position in input data
* @param[in/out] json ... that is being created during normalization
* @param[out] endNode if a match was found, this is the matching node (undefined otherwise)
*
* @return number of characters left unparsed by following the subtree, negative if
* the to-be-parsed message is shorter than the rule sample by this number of
* characters.
*/
static int
ln_v1_normalizeRec(struct ln_ptree *tree, const char *str, size_t strLen, size_t offs, struct json_object *json,
struct ln_ptree **endNode)
{
int r;
int localR;
size_t i;
int left;
ln_fieldList_t *node;
ln_fieldList_t *restMotifNode = NULL;
char *cstr;
const char *c;
unsigned char *cpfix;
unsigned ipfix;
size_t parsed;
char *namestr;
struct json_object *value;
++tree->stats.visited;
if(offs >= strLen) {
*endNode = tree;
r = -tree->lenPrefix;
goto done;
}
LN_DBGPRINTF(tree->ctx, "%zu: enter parser, tree %p", offs, tree);
c = str;
cpfix = prefixBase(tree);
node = tree->froot;
r = strLen - offs;
/* first we need to check if the common prefix matches (and consume input data while we do) */
ipfix = 0;
while(offs < strLen && ipfix < tree->lenPrefix) {
LN_DBGPRINTF(tree->ctx, "%zu: prefix compare '%c', '%c'", offs, c[offs], cpfix[ipfix]);
if(c[offs] != cpfix[ipfix]) {
r -= ipfix;
goto done;
}
++offs, ++ipfix;
}
if(ipfix != tree->lenPrefix) {
/* incomplete prefix match --> to-be-normalized string too short */
r = ipfix - tree->lenPrefix;
goto done;
}
r -= ipfix;
LN_DBGPRINTF(tree->ctx, "%zu: prefix compare succeeded, still valid", offs);
/* now try the parsers */
while(node != NULL) {
if(tree->ctx->debug) {
cstr = es_str2cstr(node->name, NULL);
LN_DBGPRINTF(tree->ctx, "%zu:trying parser for field '%s': %p",
offs, cstr, node->parser);
free(cstr);
}
i = offs;
if(node->isIPTables) {
localR = ln_iptablesParser(tree, str, strLen, &i, json);
LN_DBGPRINTF(tree->ctx, "%zu iptables parser return, i=%zu",
offs, i);
if(localR == 0) {
/* potential hit, need to verify */
LN_DBGPRINTF(tree->ctx, "potential hit, trying subtree");
left = ln_v1_normalizeRec(node->subtree, str, strLen, i, json, endNode);
if(left == 0 && (*endNode)->flags.isTerminal) {
LN_DBGPRINTF(tree->ctx, "%zu: parser matches at %zu", offs, i);
r = 0;
goto done;
}
LN_DBGPRINTF(tree->ctx, "%zu nonmatch, backtracking required, left=%d",
offs, left);
++tree->stats.backtracked;
if(left < r)
r = left;
}
} else if(node->parser == ln_parseRest) {
/* This is a quick and dirty adjustment to handle "rest" more intelligently.
* It's just a tactical fix: in the longer term, we'll handle the whole
* situation differently. However, it makes sense to fix this now, as this
* solves some real-world problems immediately. -- rgerhards, 2015-04-15
*/
restMotifNode = node;
} else {
value = NULL;
localR = node->parser(str, strLen, &i, node, &parsed, &value);
LN_DBGPRINTF(tree->ctx, "parser returns %d, parsed %zu", localR, parsed);
if(localR == 0) {
/* potential hit, need to verify */
LN_DBGPRINTF(tree->ctx, "%zu: potential hit, trying subtree %p", offs, node->subtree);
left = ln_v1_normalizeRec(node->subtree, str, strLen, i + parsed, json, endNode);
LN_DBGPRINTF(tree->ctx, "%zu: subtree returns %d", offs, r);
if(left == 0 && (*endNode)->flags.isTerminal) {
LN_DBGPRINTF(tree->ctx, "%zu: parser matches at %zu", offs, i);
if(es_strbufcmp(node->name, (unsigned char*)"-", 1)) {
/* Store the value here; create json if not already created */
if (value == NULL) {
CHKN(cstr = strndup(str + i, parsed));
value = json_object_new_string(cstr);
free(cstr);
}
if (value == NULL) {
LN_DBGPRINTF(tree->ctx, "unable to create json");
goto done;
}
namestr = ln_es_str2cstr(&node->name);
json_object_object_add(json, namestr, value);
} else {
if (value != NULL) {
/* Free the unneeded value */
json_object_put(value);
}
}
r = 0;
goto done;
}
LN_DBGPRINTF(tree->ctx, "%zu nonmatch, backtracking required, left=%d",
offs, left);
if (value != NULL) {
/* Free the value if it was created */
json_object_put(value);
}
if(left > 0 && left < r)
r = left;
LN_DBGPRINTF(tree->ctx, "%zu nonmatch, backtracking required, left=%d, r now %d",
offs, left, r);
++tree->stats.backtracked;
}
}
node = node->next;
}
if(offs == strLen) {
*endNode = tree;
r = 0;
goto done;
}
if(offs < strLen) {
unsigned char cc = str[offs];
LN_DBGPRINTF(tree->ctx, "%zu no field, trying subtree char '%c': %p", offs, cc, tree->subtree[cc]);
} else {
LN_DBGPRINTF(tree->ctx, "%zu no field, offset already beyond end", offs);
}
/* now let's see if we have a literal */
if(tree->subtree[(unsigned char)str[offs]] != NULL) {
left = ln_v1_normalizeRec(tree->subtree[(unsigned char)str[offs]],
str, strLen, offs + 1, json, endNode);
LN_DBGPRINTF(tree->ctx, "%zu got left %d, r %d", offs, left, r);
if(left < r)
r = left;
LN_DBGPRINTF(tree->ctx, "%zu got return %d", offs, r);
}
if(r == 0 && (*endNode)->flags.isTerminal)
goto done;
/* and finally give "rest" a try if it was present. Note that we MUST do this after
* literal evaluation, otherwise "rest" can never be overridden by other rules.
*/
if(restMotifNode != NULL) {
LN_DBGPRINTF(tree->ctx, "rule has rest motif, forcing match via it");
value = NULL;
restMotifNode->parser(str, strLen, &i, restMotifNode, &parsed, &value);
# ifndef NDEBUG
left = /* we only need this for the assert below */
# endif
ln_v1_normalizeRec(restMotifNode->subtree, str, strLen, i + parsed, json, endNode);
assert(left == 0); /* with rest, we have this invariant */
assert((*endNode)->flags.isTerminal); /* this one also */
LN_DBGPRINTF(tree->ctx, "%zu: parser matches at %zu", offs, i);
if(es_strbufcmp(restMotifNode->name, (unsigned char*)"-", 1)) {
/* Store the value here; create json if not already created */
if (value == NULL) {
CHKN(cstr = strndup(str + i, parsed));
value = json_object_new_string(cstr);
free(cstr);
}
if (value == NULL) {
LN_DBGPRINTF(tree->ctx, "unable to create json");
goto done;
}
namestr = ln_es_str2cstr(&restMotifNode->name);
json_object_object_add(json, namestr, value);
} else {
if (value != NULL) {
/* Free the unneeded value */
json_object_put(value);
}
}
r = 0;
goto done;
}
done:
LN_DBGPRINTF(tree->ctx, "%zu returns %d", offs, r);
if(r == 0 && *endNode == tree)
++tree->stats.terminated;
return r;
}
int
ln_v1_normalize(ln_ctx ctx, const char *str, size_t strLen, struct json_object **json_p)
{
int r;
int left;
struct ln_ptree *endNode = NULL;
if(*json_p == NULL) {
CHKN(*json_p = json_object_new_object());
}
left = ln_v1_normalizeRec(ctx->ptree, str, strLen, 0, *json_p, &endNode);
if(ctx->debug) {
if(left == 0) {
LN_DBGPRINTF(ctx, "final result for normalizer: left %d, endNode %p, "
"isTerminal %d, tagbucket %p",
left, endNode, endNode->flags.isTerminal, endNode->tags);
} else {
LN_DBGPRINTF(ctx, "final result for normalizer: left %d, endNode %p",
left, endNode);
}
}
if(left != 0 || !endNode->flags.isTerminal) {
/* we could not successfully parse, some unparsed items left */
if(left < 0) {
addUnparsedField(str, strLen, strLen, *json_p);
} else {
addUnparsedField(str, strLen, strLen - left, *json_p);
}
} else {
/* success, finalize event */
if(endNode->tags != NULL) {
/* add tags to an event */
json_object_get(endNode->tags);
json_object_object_add(*json_p, "event.tags", endNode->tags);
CHKR(ln_annotate(ctx, *json_p, endNode->tags));
}
}
r = 0;
done: return r;
}
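/* Minimal usage sketch (assumptions: the public API from liblognorm.h, json-c's
 * <json.h>, and a made-up rulebase file name). ln_v1_normalize() itself is
 * normally reached through ln_normalize() once a v1 rulebase has been loaded;
 * error checks are omitted for brevity.
 */
#if 0
#include <stdio.h>
#include <string.h>
#include <json.h>
#include <liblognorm.h>
static void
exampleNormalize(const char *msg)
{
	ln_ctx ctx = ln_initCtx();
	struct json_object *json = NULL;
	ln_loadSamples(ctx, "rules.rulebase");
	ln_normalize(ctx, msg, strlen(msg), &json);
	if(json != NULL) {
		printf("%s\n", json_object_to_json_string(json));
		json_object_put(json);
	}
	ln_exitCtx(ctx);
}
#endif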
/**
* Gather and output ptree statistics for the full parse tree (ctx)
* including all disconnected components (type defs).
*
* Data is sent to given file ptr.
*/
void
ln_fullPTreeStats(ln_ctx ctx, FILE __attribute__((unused)) *const fp,
const int __attribute__((unused)) extendedStats)
{
ln_displayPTree(ctx->ptree, 0);
}
liblognorm-2.0.6/src/enc.h 0000644 0001750 0001750 00000002636 13273030617 012322 0000000 0000000 /**
* @file enc.h
* @brief Encoder functions
*/
/*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#ifndef LIBLOGNORM_ENC_H_INCLUDED
#define LIBLOGNORM_ENC_H_INCLUDED
int ln_fmtEventToRFC5424(struct json_object *json, es_str_t **str);
int ln_fmtEventToCSV(struct json_object *json, es_str_t **str, es_str_t *extraData);
int ln_fmtEventToXML(struct json_object *json, es_str_t **str);
#endif /* LIBLOGNORM_ENC_H_INCLUDED */
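/* Usage sketch (illustrative, not part of the header): each encoder takes the
 * normalized JSON event and fills in a freshly allocated es_str_t, e.g.
 *
 *   es_str_t *out = NULL;
 *   ln_fmtEventToXML(json, &out);
 *   if(out != NULL) {
 *       char *cstr = es_str2cstr(out, NULL);
 *       fputs(cstr, stdout);
 *       free(cstr);
 *       es_deleteStr(out);
 *   }
 *
 * The CSV variant additionally takes an extraData string that selects the
 * fields to emit; see enc_csv.c for the exact semantics.
 */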
liblognorm-2.0.6/src/enc_xml.c 0000644 0001750 0001750 00000013104 13273030617 013165 0000000 0000000 /**
* @file enc-xml.c
* Encoder for XML format.
*
* This file contains code from all related objects that is required in
* order to encode this format. The core idea of putting all of this into
* a single file is that this makes it very straightforward to write
* encoders for different encodings, as all is in one place.
*
*/
/*
* liblognorm - a fast samples-based log normalization library
* Copyright 2010-2016 by Rainer Gerhards and Adiscon GmbH.
*
* Modified by Pavel Levshin (pavel@levshin.spb.ru) in 2013
*
* This file is part of liblognorm.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*
* A copy of the LGPL v2.1 can be found in the file "COPYING" in this distribution.
*/
#include "config.h"
#include <stdio.h>
#include <string.h>
#include <assert.h>
#include <libestr.h>
#include <json.h>
#include "lognorm.h"
#include "internal.h"
#include "enc.h"
#if 0
static char hexdigit[16] =
{'0', '1', '2', '3', '4', '5', '6', '7', '8',
'9', 'A', 'B', 'C', 'D', 'E', 'F' };
#endif
/* TODO: XML encoding for Unicode characters is as of RFC4627 not fully
* supported. The algorithm is that we must build the wide character from
* UTF-8 (if char > 127) and build the full 4-octet Unicode character out
* of it. Then, this needs to be encoded. Currently, we work on a
* byte-by-byte basis, which simply is incorrect.
* rgerhards, 2010-11-09
*/
static int
ln_addValue_XML(const char *value, es_str_t **str)
{
int r;
unsigned char c;
es_size_t i;
#if 0
char numbuf[4];
int j;
#endif
assert(str != NULL);
assert(*str != NULL);
assert(value != NULL);
// TODO: support other types!
es_addBuf(str, "", 7);
for(i = 0 ; i < strlen(value) ; ++i) {
c = value[i];
switch(c) {
case '\0':
es_addBuf(str, "", 5);
break;
#if 0
case '\n':
es_addBuf(str, "
", 5);
break;
case '\r':
es_addBuf(str, "
", 5);
break;
case '\t':
es_addBuf(str, "&x08;", 5);
break;
case '\"':
es_addBuf(str, """, 6);
break;
#endif
case '<':
es_addBuf(str, "<", 4);
break;
case '&':
es_addBuf(str, "&", 5);
break;
#if 0
case ',':
es_addBuf(str, "\\,", 2);
break;
case '\'':
es_addBuf(str, "'", 6);
break;
#endif
default:
es_addChar(str, c);
#if 0
/* TODO : proper Unicode encoding (see header comment) */
for(j = 0 ; j < 4 ; ++j) {
numbuf[3-j] = hexdigit[c % 16];
c = c / 16;
}
es_addBuf(str, "\\u", 2);
es_addBuf(str, numbuf, 4);
break;
#endif
}
}
es_addBuf(str, "", 8);
r = 0;
return r;
}
static int
ln_addField_XML(char *name, struct json_object *field, es_str_t **str)
{
int r;
int i;
const char *value;
struct json_object *obj;
assert(field != NULL);
assert(str != NULL);
assert(*str != NULL);
CHKR(es_addBuf(str, "", 2));
switch(json_object_get_type(field)) {
case json_type_array:
for (i = json_object_array_length(field) - 1; i >= 0; i--) {
CHKN(obj = json_object_array_get_idx(field, i));
CHKN(value = json_object_get_string(obj));
CHKR(ln_addValue_XML(value, str));
}
break;
case json_type_string:
case json_type_int:
CHKN(value = json_object_get_string(field));
CHKR(ln_addValue_XML(value, str));
break;
case json_type_null:
case json_type_boolean:
case json_type_double:
case json_type_object:
CHKR(es_addBuf(str, "***unsupported type***", sizeof("***unsupported type***")-1));
break;
default:
CHKR(es_addBuf(str, "***OBJECT***", sizeof("***OBJECT***")-1));
}
CHKR(es_addBuf(str, "", 8));
r = 0;
done:
return r;
}
static inline int
ln_addTags_XML(struct json_object *taglist, es_str_t **str)
{
int r = 0;
struct json_object *tagObj;
const char *tagCstr;
int i;
CHKR(es_addBuf(str, "", 12));
for (i = json_object_array_length(taglist) - 1; i >= 0; i--) {
CHKR(es_addBuf(str, "", 5));
CHKN(tagObj = json_object_array_get_idx(taglist, i));
CHKN(tagCstr = json_object_get_string(tagObj));
CHKR(es_addBuf(str, (char*)tagCstr, strlen(tagCstr)));
CHKR(es_addBuf(str, "", 6));
}
CHKR(es_addBuf(str, "", 13));
done: return r;
}
int
ln_fmtEventToXML(struct json_object *json, es_str_t **str)
{
int r = -1;
struct json_object *tags;
assert(json != NULL);
assert(json_object_is_type(json, json_type_object));
if((*str = es_newStr(256)) == NULL)
goto done;
es_addBuf(str, "", 7);
if(json_object_object_get_ex(json, "event.tags", &tags)) {
CHKR(ln_addTags_XML(tags, str));
}
struct json_object_iterator it = json_object_iter_begin(json);
struct json_object_iterator itEnd = json_object_iter_end(json);
while (!json_object_iter_equal(&it, &itEnd)) {
char *const name = (char*) json_object_iter_peek_name(&it);
if (strcmp(name, "event.tags")) {
ln_addField_XML(name, json_object_iter_peek_value(&it), str);
}
json_object_iter_next(&it);
}
es_addBuf(str, "", 8);
done:
return r;
}
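/* Illustrative output (field set made up): for a normalized event such as
 *   { "event.tags": ["login"], "user": "root" }
 * the encoder above emits a single <event>...</event> document in which the
 * tag list appears first (one <tag> element per entry inside <event.tags>) and
 * every remaining JSON member is rendered via ln_addField_XML()/ln_addValue_XML(),
 * with '<', '&', and NUL bytes escaped as shown in ln_addValue_XML().
 */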
liblognorm-2.0.6/compat/ 0000755 0001750 0001750 00000000000 13370251163 012150 5 0000000 0000000 liblognorm-2.0.6/compat/asprintf.c 0000644 0001750 0001750 00000002507 13370250151 014062 0000000 0000000 /* compatibility file for systems without asprintf.
*
* Copyright 2015 Rainer Gerhards and Adiscon
*
* This file is part of rsyslog.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* -or-
* see COPYING.ASL20 in the source distribution
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "config.h"
#ifndef HAVE_ASPRINTF
#include <stdio.h>
#include <stdarg.h>
#include <stdlib.h>
int asprintf(char **strp, const char *fmt, ...)
{
va_list ap;
int len;
va_start(ap, fmt);
len = vsnprintf(NULL, 0, fmt, ap);
va_end(ap);
*strp = malloc(len+1);
if (!*strp) {
return -1;
}
va_start(ap, fmt);
vsnprintf(*strp, len+1, fmt, ap);
va_end(ap);
(*strp)[len] = 0;
return len;
}
#else
/* XLC needs at least one method in source file even static to compile */
#ifdef __xlc__
static void dummy() {}
#endif
#endif /* #ifndef HAVE_ASPRINTF */
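/* Usage sketch (illustrative): the fallback above sizes the buffer with a first
 * vsnprintf() pass and then formats into it. As with the native asprintf(), the
 * caller owns and must free the result:
 *
 *   char *msg = NULL;
 *   if(asprintf(&msg, "parser %s failed at offset %d", "date-rfc3164", 7) != -1) {
 *       fputs(msg, stderr);
 *       free(msg);
 *   }
 */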
liblognorm-2.0.6/compat/Makefile.am 0000644 0001750 0001750 00000000334 13370250151 014120 0000000 0000000 noinst_LTLIBRARIES = compat.la
compat_la_SOURCES = strndup.c asprintf.c
compat_la_CPPFLAGS = -I$(top_srcdir) $(PTHREADS_CFLAGS) $(RSRT_CFLAGS)
compat_la_LDFLAGS = -module -avoid-version
compat_la_LIBADD = $(IMUDP_LIBS)
liblognorm-2.0.6/compat/Makefile.in 0000644 0001750 0001750 00000050002 13370251154 014132 0000000 0000000 # Makefile.in generated by automake 1.15.1 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2017 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
am__is_gnu_make = { \
if test -z '$(MAKELEVEL)'; then \
false; \
elif test -n '$(MAKE_HOST)'; then \
true; \
elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
true; \
else \
false; \
fi; \
}
am__make_running_with_option = \
case $${target_option-} in \
?) ;; \
*) echo "am__make_running_with_option: internal error: invalid" \
"target option '$${target_option-}' specified" >&2; \
exit 1;; \
esac; \
has_opt=no; \
sane_makeflags=$$MAKEFLAGS; \
if $(am__is_gnu_make); then \
sane_makeflags=$$MFLAGS; \
else \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
bs=\\; \
sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
| sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
esac; \
fi; \
skip_next=no; \
strip_trailopt () \
{ \
flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
}; \
for flg in $$sane_makeflags; do \
test $$skip_next = yes && { skip_next=no; continue; }; \
case $$flg in \
*=*|--*) continue;; \
-*I) strip_trailopt 'I'; skip_next=yes;; \
-*I?*) strip_trailopt 'I';; \
-*O) strip_trailopt 'O'; skip_next=yes;; \
-*O?*) strip_trailopt 'O';; \
-*l) strip_trailopt 'l'; skip_next=yes;; \
-*l?*) strip_trailopt 'l';; \
-[dEDm]) skip_next=yes;; \
-[JT]) skip_next=yes;; \
esac; \
case $$flg in \
*$$target_option*) has_opt=yes; break;; \
esac; \
done; \
test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
host_triplet = @host@
subdir = compat
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/m4/libtool.m4 \
$(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
$(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
CONFIG_CLEAN_VPATH_FILES =
LTLIBRARIES = $(noinst_LTLIBRARIES)
compat_la_DEPENDENCIES =
am_compat_la_OBJECTS = compat_la-strndup.lo compat_la-asprintf.lo
compat_la_OBJECTS = $(am_compat_la_OBJECTS)
AM_V_lt = $(am__v_lt_@AM_V@)
am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@)
am__v_lt_0 = --silent
am__v_lt_1 =
compat_la_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(compat_la_LDFLAGS) $(LDFLAGS) -o $@
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir)
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
am__mv = mv -f
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \
$(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \
$(AM_CFLAGS) $(CFLAGS)
AM_V_CC = $(am__v_CC_@AM_V@)
am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@)
am__v_CC_0 = @echo " CC " $@;
am__v_CC_1 =
CCLD = $(CC)
LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(AM_LDFLAGS) $(LDFLAGS) -o $@
AM_V_CCLD = $(am__v_CCLD_@AM_V@)
am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@)
am__v_CCLD_0 = @echo " CCLD " $@;
am__v_CCLD_1 =
SOURCES = $(compat_la_SOURCES)
DIST_SOURCES = $(compat_la_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
# Read a list of newline-separated strings from the standard input,
# and print each of them once, without duplicates. Input order is
# *not* preserved.
am__uniquify_input = $(AWK) '\
BEGIN { nonempty = 0; } \
{ items[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in items) print i; }; } \
'
# Make sure the list of sources is unique. This is necessary because,
# e.g., the same source file might be shared among _SOURCES variables
# for different programs/libraries.
am__define_uniq_tagged_files = \
list='$(am__tagged_files)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | $(am__uniquify_input)`
ETAGS = etags
CTAGS = ctags
am__DIST_COMMON = $(srcdir)/Makefile.in $(top_srcdir)/depcomp
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AR = @AR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
DLLTOOL = @DLLTOOL@
DSYMUTIL = @DSYMUTIL@
DUMPBIN = @DUMPBIN@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
FEATURE_REGEXP = @FEATURE_REGEXP@
FGREP = @FGREP@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
JSON_C_CFLAGS = @JSON_C_CFLAGS@
JSON_C_LIBS = @JSON_C_LIBS@
LD = @LD@
LDFLAGS = @LDFLAGS@
LIBESTR_CFLAGS = @LIBESTR_CFLAGS@
LIBESTR_LIBS = @LIBESTR_LIBS@
LIBLOGNORM_CFLAGS = @LIBLOGNORM_CFLAGS@
LIBLOGNORM_LIBS = @LIBLOGNORM_LIBS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LIBTOOL = @LIBTOOL@
LIPO = @LIPO@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@
MAKEINFO = @MAKEINFO@
MANIFEST_TOOL = @MANIFEST_TOOL@
MKDIR_P = @MKDIR_P@
NM = @NM@
NMEDIT = @NMEDIT@
OBJDUMP = @OBJDUMP@
OBJEXT = @OBJEXT@
OTOOL = @OTOOL@
OTOOL64 = @OTOOL64@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PCRE_CFLAGS = @PCRE_CFLAGS@
PCRE_LIBS = @PCRE_LIBS@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
RANLIB = @RANLIB@
SED = @SED@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
SPHINXBUILD = @SPHINXBUILD@
STRIP = @STRIP@
VALGRIND = @VALGRIND@
VERSION = @VERSION@
WARN_CFLAGS = @WARN_CFLAGS@
WARN_LDFLAGS = @WARN_LDFLAGS@
WARN_SCANNERFLAGS = @WARN_SCANNERFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_AR = @ac_ct_AR@
ac_ct_CC = @ac_ct_CC@
ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host = @host@
host_alias = @host_alias@
host_cpu = @host_cpu@
host_os = @host_os@
host_vendor = @host_vendor@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
pkg_config_libs_private = @pkg_config_libs_private@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
noinst_LTLIBRARIES = compat.la
compat_la_SOURCES = strndup.c asprintf.c
compat_la_CPPFLAGS = -I$(top_srcdir) $(PTHREADS_CFLAGS) $(RSRT_CFLAGS)
compat_la_LDFLAGS = -module -avoid-version
compat_la_LIBADD = $(IMUDP_LIBS)
all: all-am
.SUFFIXES:
.SUFFIXES: .c .lo .o .obj
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
&& { if test -f $@; then exit 0; else break; fi; }; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu compat/Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --gnu compat/Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(am__aclocal_m4_deps):
clean-noinstLTLIBRARIES:
-test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES)
@list='$(noinst_LTLIBRARIES)'; \
locs=`for p in $$list; do echo $$p; done | \
sed 's|^[^/]*$$|.|; s|/[^/]*$$||; s|$$|/so_locations|' | \
sort -u`; \
test -z "$$locs" || { \
echo rm -f $${locs}; \
rm -f $${locs}; \
}
compat.la: $(compat_la_OBJECTS) $(compat_la_DEPENDENCIES) $(EXTRA_compat_la_DEPENDENCIES)
$(AM_V_CCLD)$(compat_la_LINK) $(compat_la_OBJECTS) $(compat_la_LIBADD) $(LIBS)
mostlyclean-compile:
-rm -f *.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/compat_la-asprintf.Plo@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/compat_la-strndup.Plo@am__quote@
.c.o:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $<
.c.obj:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'`
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
.c.lo:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $<
compat_la-strndup.lo: strndup.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(compat_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT compat_la-strndup.lo -MD -MP -MF $(DEPDIR)/compat_la-strndup.Tpo -c -o compat_la-strndup.lo `test -f 'strndup.c' || echo '$(srcdir)/'`strndup.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/compat_la-strndup.Tpo $(DEPDIR)/compat_la-strndup.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='strndup.c' object='compat_la-strndup.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(compat_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o compat_la-strndup.lo `test -f 'strndup.c' || echo '$(srcdir)/'`strndup.c
compat_la-asprintf.lo: asprintf.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(compat_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT compat_la-asprintf.lo -MD -MP -MF $(DEPDIR)/compat_la-asprintf.Tpo -c -o compat_la-asprintf.lo `test -f 'asprintf.c' || echo '$(srcdir)/'`asprintf.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/compat_la-asprintf.Tpo $(DEPDIR)/compat_la-asprintf.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='asprintf.c' object='compat_la-asprintf.lo' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(compat_la_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o compat_la-asprintf.lo `test -f 'asprintf.c' || echo '$(srcdir)/'`asprintf.c
mostlyclean-libtool:
-rm -f *.lo
clean-libtool:
-rm -rf .libs _libs
ID: $(am__tagged_files)
$(am__define_uniq_tagged_files); mkid -fID $$unique
tags: tags-am
TAGS: tags
tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
set x; \
here=`pwd`; \
$(am__define_uniq_tagged_files); \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: ctags-am
CTAGS: ctags
ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
$(am__define_uniq_tagged_files); \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"
cscopelist: cscopelist-am
cscopelist-am: $(am__tagged_files)
list='$(am__tagged_files)'; \
case "$(srcdir)" in \
[\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
*) sdir=$(subdir)/$(srcdir) ;; \
esac; \
for i in $$list; do \
if test -f "$$i"; then \
echo "$(subdir)/$$i"; \
else \
echo "$$sdir/$$i"; \
fi; \
done >> $(top_builddir)/cscope.files
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(LTLIBRARIES)
installdirs:
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic clean-libtool clean-noinstLTLIBRARIES \
mostlyclean-am
distclean: distclean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am:
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am:
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man:
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic \
mostlyclean-libtool
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am:
.MAKE: install-am install-strip
.PHONY: CTAGS GTAGS TAGS all all-am check check-am clean clean-generic \
clean-libtool clean-noinstLTLIBRARIES cscopelist-am ctags \
ctags-am distclean distclean-compile distclean-generic \
distclean-libtool distclean-tags distdir dvi dvi-am html \
html-am info info-am install install-am install-data \
install-data-am install-dvi install-dvi-am install-exec \
install-exec-am install-html install-html-am install-info \
install-info-am install-man install-pdf install-pdf-am \
install-ps install-ps-am install-strip installcheck \
installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-compile \
mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
tags tags-am uninstall uninstall-am
.PRECIOUS: Makefile
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
liblognorm-2.0.6/compat/strndup.c 0000644 0001750 0001750 00000002432 13273030617 013735 0000000 0000000 /* compatibility file for systems without strndup.
*
* Copyright 2015 Rainer Gerhards and Adiscon
*
* This file is part of liblognorm.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
* -or-
* see COPYING.ASL20 in the source distribution
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "config.h"
#ifndef HAVE_STRNDUP
#include <string.h>
#include <stdlib.h>
char *
strndup(const char *s, size_t n)
{
const size_t len = strlen(s);
if(len <= n)
return strdup(s);
char *const new_s = malloc(n+1);
if(new_s == NULL)
return NULL;
memcpy(new_s, s, n); /* copy only the first n bytes, then NUL-terminate */
new_s[n] = '\0';
return new_s;
}
#else /* #ifndef HAVE_STRNDUP */
/* Solaris must have at least one symbol inside a file, so we provide
* it here ;-)
*/
void dummy_dummy_required_for_solaris_do_not_use(void)
{
}
#endif /* #ifndef HAVE_STRNDUP */
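/* Usage sketch (illustrative): the fallback copies at most n bytes and always
 * NUL-terminates, e.g. strndup("originalmsg", 8) yields "original", while
 * strndup("msg", 8) simply degenerates to strdup("msg").
 */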
liblognorm-2.0.6/aclocal.m4 0000644 0001750 0001750 00000261243 13370251153 012454 0000000 0000000 # generated automatically by aclocal 1.15.1 -*- Autoconf -*-
# Copyright (C) 1996-2017 Free Software Foundation, Inc.
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
m4_ifndef([AC_AUTOCONF_VERSION],
[m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.69],,
[m4_warning([this file was generated for autoconf 2.69.
You have another version of autoconf. It may work, but is not guaranteed to.
If you have problems, you may need to regenerate the build system entirely.
To do so, use the procedure documented by the package, typically 'autoreconf'.])])
# ============================================================================
# https://www.gnu.org/software/autoconf-archive/ax_append_compile_flags.html
# ============================================================================
#
# SYNOPSIS
#
# AX_APPEND_COMPILE_FLAGS([FLAG1 FLAG2 ...], [FLAGS-VARIABLE], [EXTRA-FLAGS], [INPUT])
#
# DESCRIPTION
#
# For every FLAG1, FLAG2 it is checked whether the compiler works with the
# flag. If it does, the flag is added FLAGS-VARIABLE
#
# If FLAGS-VARIABLE is not specified, the current language's flags (e.g.
# CFLAGS) is used. During the check the flag is always added to the
# current language's flags.
#
# If EXTRA-FLAGS is defined, it is added to the current language's default
# flags (e.g. CFLAGS) when the check is done. The check is thus made with
# the flags: "CFLAGS EXTRA-FLAGS FLAG". This can for example be used to
# force the compiler to issue an error when a bad flag is given.
#
# INPUT gives an alternative input source to AC_COMPILE_IFELSE.
#
# NOTE: This macro depends on the AX_APPEND_FLAG and
# AX_CHECK_COMPILE_FLAG. Please keep this macro in sync with
# AX_APPEND_LINK_FLAGS.
#
# LICENSE
#
# Copyright (c) 2011 Maarten Bosmans
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <https://www.gnu.org/licenses/>.
#
# As a special exception, the respective Autoconf Macro's copyright owner
# gives unlimited permission to copy, distribute and modify the configure
# scripts that are the output of Autoconf when processing the Macro. You
# need not follow the terms of the GNU General Public License when using
# or distributing such scripts, even though portions of the text of the
# Macro appear in them. The GNU General Public License (GPL) does govern
# all other use of the material that constitutes the Autoconf Macro.
#
# This special exception to the GPL applies to versions of the Autoconf
# Macro released by the Autoconf Archive. When you make and distribute a
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 6
AC_DEFUN([AX_APPEND_COMPILE_FLAGS],
[AX_REQUIRE_DEFINED([AX_CHECK_COMPILE_FLAG])
AX_REQUIRE_DEFINED([AX_APPEND_FLAG])
for flag in $1; do
AX_CHECK_COMPILE_FLAG([$flag], [AX_APPEND_FLAG([$flag], [$2])], [], [$3], [$4])
done
])dnl AX_APPEND_COMPILE_FLAGS
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_append_flag.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_APPEND_FLAG(FLAG, [FLAGS-VARIABLE])
#
# DESCRIPTION
#
# FLAG is appended to the FLAGS-VARIABLE shell variable, with a space
# added in between.
#
# If FLAGS-VARIABLE is not specified, the current language's flags (e.g.
# CFLAGS) is used. FLAGS-VARIABLE is not changed if it already contains
# FLAG. If FLAGS-VARIABLE is unset in the shell, it is set to exactly
# FLAG.
#
# NOTE: Implementation based on AX_CFLAGS_GCC_OPTION.
#
# LICENSE
#
# Copyright (c) 2008 Guido U. Draheim
# Copyright (c) 2011 Maarten Bosmans
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <https://www.gnu.org/licenses/>.
#
# As a special exception, the respective Autoconf Macro's copyright owner
# gives unlimited permission to copy, distribute and modify the configure
# scripts that are the output of Autoconf when processing the Macro. You
# need not follow the terms of the GNU General Public License when using
# or distributing such scripts, even though portions of the text of the
# Macro appear in them. The GNU General Public License (GPL) does govern
# all other use of the material that constitutes the Autoconf Macro.
#
# This special exception to the GPL applies to versions of the Autoconf
# Macro released by the Autoconf Archive. When you make and distribute a
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 7
AC_DEFUN([AX_APPEND_FLAG],
[dnl
AC_PREREQ(2.64)dnl for _AC_LANG_PREFIX and AS_VAR_SET_IF
AS_VAR_PUSHDEF([FLAGS], [m4_default($2,_AC_LANG_PREFIX[FLAGS])])
AS_VAR_SET_IF(FLAGS,[
AS_CASE([" AS_VAR_GET(FLAGS) "],
[*" $1 "*], [AC_RUN_LOG([: FLAGS already contains $1])],
[
AS_VAR_APPEND(FLAGS,[" $1"])
AC_RUN_LOG([: FLAGS="$FLAGS"])
])
],
[
AS_VAR_SET(FLAGS,[$1])
AC_RUN_LOG([: FLAGS="$FLAGS"])
])
AS_VAR_POPDEF([FLAGS])dnl
])dnl AX_APPEND_FLAG
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_append_link_flags.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_APPEND_LINK_FLAGS([FLAG1 FLAG2 ...], [FLAGS-VARIABLE], [EXTRA-FLAGS], [INPUT])
#
# DESCRIPTION
#
# For every FLAG1, FLAG2 it is checked whether the linker works with the
# flag. If it does, the flag is added FLAGS-VARIABLE
#
# If FLAGS-VARIABLE is not specified, the linker's flags (LDFLAGS) is
# used. During the check the flag is always added to the linker's flags.
#
# If EXTRA-FLAGS is defined, it is added to the linker's default flags
# when the check is done. The check is thus made with the flags: "LDFLAGS
# EXTRA-FLAGS FLAG". This can for example be used to force the linker to
# issue an error when a bad flag is given.
#
# INPUT gives an alternative input source to AC_COMPILE_IFELSE.
#
# NOTE: This macro depends on the AX_APPEND_FLAG and AX_CHECK_LINK_FLAG.
# Please keep this macro in sync with AX_APPEND_COMPILE_FLAGS.
#
# LICENSE
#
# Copyright (c) 2011 Maarten Bosmans
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <https://www.gnu.org/licenses/>.
#
# As a special exception, the respective Autoconf Macro's copyright owner
# gives unlimited permission to copy, distribute and modify the configure
# scripts that are the output of Autoconf when processing the Macro. You
# need not follow the terms of the GNU General Public License when using
# or distributing such scripts, even though portions of the text of the
# Macro appear in them. The GNU General Public License (GPL) does govern
# all other use of the material that constitutes the Autoconf Macro.
#
# This special exception to the GPL applies to versions of the Autoconf
# Macro released by the Autoconf Archive. When you make and distribute a
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 6
AC_DEFUN([AX_APPEND_LINK_FLAGS],
[AX_REQUIRE_DEFINED([AX_CHECK_LINK_FLAG])
AX_REQUIRE_DEFINED([AX_APPEND_FLAG])
for flag in $1; do
AX_CHECK_LINK_FLAG([$flag], [AX_APPEND_FLAG([$flag], [m4_default([$2], [LDFLAGS])])], [], [$3], [$4])
done
])dnl AX_APPEND_LINK_FLAGS
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_check_compile_flag.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_CHECK_COMPILE_FLAG(FLAG, [ACTION-SUCCESS], [ACTION-FAILURE], [EXTRA-FLAGS], [INPUT])
#
# DESCRIPTION
#
# Check whether the given FLAG works with the current language's compiler
# or gives an error. (Warnings, however, are ignored)
#
# ACTION-SUCCESS/ACTION-FAILURE are shell commands to execute on
# success/failure.
#
# If EXTRA-FLAGS is defined, it is added to the current language's default
# flags (e.g. CFLAGS) when the check is done. The check is thus made with
# the flags: "CFLAGS EXTRA-FLAGS FLAG". This can for example be used to
# force the compiler to issue an error when a bad flag is given.
#
# INPUT gives an alternative input source to AC_COMPILE_IFELSE.
#
# NOTE: Implementation based on AX_CFLAGS_GCC_OPTION. Please keep this
# macro in sync with AX_CHECK_{PREPROC,LINK}_FLAG.
#
# LICENSE
#
# Copyright (c) 2008 Guido U. Draheim
# Copyright (c) 2011 Maarten Bosmans
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <https://www.gnu.org/licenses/>.
#
# As a special exception, the respective Autoconf Macro's copyright owner
# gives unlimited permission to copy, distribute and modify the configure
# scripts that are the output of Autoconf when processing the Macro. You
# need not follow the terms of the GNU General Public License when using
# or distributing such scripts, even though portions of the text of the
# Macro appear in them. The GNU General Public License (GPL) does govern
# all other use of the material that constitutes the Autoconf Macro.
#
# This special exception to the GPL applies to versions of the Autoconf
# Macro released by the Autoconf Archive. When you make and distribute a
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 5
AC_DEFUN([AX_CHECK_COMPILE_FLAG],
[AC_PREREQ(2.64)dnl for _AC_LANG_PREFIX and AS_VAR_IF
AS_VAR_PUSHDEF([CACHEVAR],[ax_cv_check_[]_AC_LANG_ABBREV[]flags_$4_$1])dnl
AC_CACHE_CHECK([whether _AC_LANG compiler accepts $1], CACHEVAR, [
ax_check_save_flags=$[]_AC_LANG_PREFIX[]FLAGS
_AC_LANG_PREFIX[]FLAGS="$[]_AC_LANG_PREFIX[]FLAGS $4 $1"
AC_COMPILE_IFELSE([m4_default([$5],[AC_LANG_PROGRAM()])],
[AS_VAR_SET(CACHEVAR,[yes])],
[AS_VAR_SET(CACHEVAR,[no])])
_AC_LANG_PREFIX[]FLAGS=$ax_check_save_flags])
AS_VAR_IF(CACHEVAR,yes,
[m4_default([$2], :)],
[m4_default([$3], :)])
AS_VAR_POPDEF([CACHEVAR])dnl
])dnl AX_CHECK_COMPILE_FLAGS
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_check_link_flag.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_CHECK_LINK_FLAG(FLAG, [ACTION-SUCCESS], [ACTION-FAILURE], [EXTRA-FLAGS], [INPUT])
#
# DESCRIPTION
#
# Check whether the given FLAG works with the linker or gives an error.
# (Warnings, however, are ignored)
#
# ACTION-SUCCESS/ACTION-FAILURE are shell commands to execute on
# success/failure.
#
# If EXTRA-FLAGS is defined, it is added to the linker's default flags
# when the check is done. The check is thus made with the flags: "LDFLAGS
# EXTRA-FLAGS FLAG". This can for example be used to force the linker to
# issue an error when a bad flag is given.
#
# INPUT gives an alternative input source to AC_LINK_IFELSE.
#
# NOTE: Implementation based on AX_CFLAGS_GCC_OPTION. Please keep this
# macro in sync with AX_CHECK_{PREPROC,COMPILE}_FLAG.
#
# LICENSE
#
# Copyright (c) 2008 Guido U. Draheim
# Copyright (c) 2011 Maarten Bosmans
#
# This program is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 3 of the License, or (at your
# option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program. If not, see <https://www.gnu.org/licenses/>.
#
# As a special exception, the respective Autoconf Macro's copyright owner
# gives unlimited permission to copy, distribute and modify the configure
# scripts that are the output of Autoconf when processing the Macro. You
# need not follow the terms of the GNU General Public License when using
# or distributing such scripts, even though portions of the text of the
# Macro appear in them. The GNU General Public License (GPL) does govern
# all other use of the material that constitutes the Autoconf Macro.
#
# This special exception to the GPL applies to versions of the Autoconf
# Macro released by the Autoconf Archive. When you make and distribute a
# modified version of the Autoconf Macro, you may extend this special
# exception to the GPL to apply to your modified version as well.
#serial 5
AC_DEFUN([AX_CHECK_LINK_FLAG],
[AC_PREREQ(2.64)dnl for _AC_LANG_PREFIX and AS_VAR_IF
AS_VAR_PUSHDEF([CACHEVAR],[ax_cv_check_ldflags_$4_$1])dnl
AC_CACHE_CHECK([whether the linker accepts $1], CACHEVAR, [
ax_check_save_flags=$LDFLAGS
LDFLAGS="$LDFLAGS $4 $1"
AC_LINK_IFELSE([m4_default([$5],[AC_LANG_PROGRAM()])],
[AS_VAR_SET(CACHEVAR,[yes])],
[AS_VAR_SET(CACHEVAR,[no])])
LDFLAGS=$ax_check_save_flags])
AS_VAR_IF(CACHEVAR,yes,
[m4_default([$2], :)],
[m4_default([$3], :)])
AS_VAR_POPDEF([CACHEVAR])dnl
])dnl AX_CHECK_LINK_FLAG
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_compiler_flags.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_COMPILER_FLAGS([CFLAGS-VARIABLE], [LDFLAGS-VARIABLE], [IS-RELEASE], [EXTRA-BASE-CFLAGS], [EXTRA-YES-CFLAGS], [UNUSED], [UNUSED], [UNUSED], [EXTRA-BASE-LDFLAGS], [EXTRA-YES-LDFLAGS], [UNUSED], [UNUSED], [UNUSED])
#
# DESCRIPTION
#
# Check for the presence of an --enable-compile-warnings option to
# configure, defaulting to "error" in normal operation, or "yes" if
# IS-RELEASE is equal to "yes". Return the value in the variable
# $ax_enable_compile_warnings.
#
# Depending on the value of --enable-compile-warnings, different compiler
# warnings are checked to see if they work with the current compiler and,
# if so, are appended to CFLAGS-VARIABLE and LDFLAGS-VARIABLE. This
# allows a consistent set of baseline compiler warnings to be used across
# a code base, irrespective of any warnings enabled locally by individual
# developers. By standardising the warnings used by all developers of a
# project, the project can commit to a zero-warnings policy, using -Werror
# to prevent compilation if new warnings are introduced. This makes
# catching bugs which are flagged by warnings a lot easier.
#
# By providing a consistent --enable-compile-warnings argument across all
# projects using this macro, continuous integration systems can easily be
# configured the same for all projects. Automated systems or build
# systems aimed at beginners may want to pass the --disable-Werror
# argument to unconditionally prevent warnings being fatal.
#
# --enable-compile-warnings can take the values:
#
# * no: Base compiler warnings only; not even -Wall.
# * yes: The above, plus a broad range of useful warnings.
# * error: The above, plus -Werror so that all warnings are fatal.
# Use --disable-Werror to override this and disable fatal
# warnings.
#
# The set of base and enabled flags can be augmented using the
# EXTRA-*-CFLAGS and EXTRA-*-LDFLAGS variables, which are tested and
# appended to the output variable if --enable-compile-warnings is not
# "no". Flags should not be disabled using these arguments, as the entire
# point of AX_COMPILER_FLAGS is to enforce a consistent set of useful
# compiler warnings on code, using warnings which have been chosen for low
# false positive rates. If a compiler emits false positives for a
# warning, a #pragma should be used in the code to disable the warning
# locally. See:
#
# https://gcc.gnu.org/onlinedocs/gcc-4.9.2/gcc/Diagnostic-Pragmas.html#Diagnostic-Pragmas
#
# The EXTRA-* variables should only be used to supply extra warning flags,
# and not general purpose compiler flags, as they are controlled by
# configure options such as --disable-Werror.
#
# IS-RELEASE can be used to disable -Werror when making a release, which
# is useful for those hairy moments when you just want to get the release
# done as quickly as possible. Set it to "yes" to disable -Werror. By
# default, it uses the value of $ax_is_release, so if you are using the
# AX_IS_RELEASE macro, there is no need to pass this parameter. For
# example:
#
# AX_IS_RELEASE([git-directory])
# AX_COMPILER_FLAGS()
#
# CFLAGS-VARIABLE defaults to WARN_CFLAGS, and LDFLAGS-VARIABLE defaults
# to WARN_LDFLAGS. Both variables are AC_SUBST-ed by this macro, but must
# be manually added to the CFLAGS and LDFLAGS variables for each target in
# the code base.
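#
# A minimal sketch of that wiring, for illustration only (it assumes the
# default variable names and uses AM_CFLAGS/AM_LDFLAGS, which apply the
# flags to every target in that Makefile.am):
#
#   configure.ac:  AX_COMPILER_FLAGS([WARN_CFLAGS],[WARN_LDFLAGS])
#   Makefile.am:   AM_CFLAGS  = $(WARN_CFLAGS)
#                  AM_LDFLAGS = $(WARN_LDFLAGS)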
#
# If C++ language support is enabled with AC_PROG_CXX, which must occur
# before this macro in configure.ac, warning flags for the C++ compiler
# are AC_SUBST-ed as WARN_CXXFLAGS, and must be manually added to the
# CXXFLAGS variables for each target in the code base. EXTRA-*-CFLAGS can
# be used to augment the base and enabled flags.
#
# Warning flags for g-ir-scanner (from GObject Introspection) are
# AC_SUBST-ed as WARN_SCANNERFLAGS. This variable must be manually added
# to the SCANNERFLAGS variable for each GIR target in the code base. If
# extra g-ir-scanner flags need to be enabled, the AX_COMPILER_FLAGS_GIR
# macro must be invoked manually.
#
# AX_COMPILER_FLAGS may add support for other tools in future, in addition
# to the compiler and linker. No extra EXTRA-* variables will be added
# for those tools, and all extra support will still use the single
# --enable-compile-warnings configure option. For finer grained control
# over the flags for individual tools, use AX_COMPILER_FLAGS_CFLAGS,
# AX_COMPILER_FLAGS_LDFLAGS and AX_COMPILER_FLAGS_* for new tools.
#
# The UNUSED variables date from a previous version of this macro, and are
# automatically appended to the preceding non-UNUSED variable. They should
# be left empty in new uses of the macro.
#
# LICENSE
#
# Copyright (c) 2014, 2015 Philip Withnall
# Copyright (c) 2015 David King
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 14
# _AX_COMPILER_FLAGS_LANG([LANGNAME])
m4_defun([_AX_COMPILER_FLAGS_LANG],
[m4_ifdef([_AX_COMPILER_FLAGS_LANG_]$1[_enabled], [],
[m4_define([_AX_COMPILER_FLAGS_LANG_]$1[_enabled], [])dnl
AX_REQUIRE_DEFINED([AX_COMPILER_FLAGS_]$1[FLAGS])])dnl
])
AC_DEFUN([AX_COMPILER_FLAGS],[
# C support is enabled by default.
_AX_COMPILER_FLAGS_LANG([C])
# Only enable C++ support if AC_PROG_CXX is called. The redefinition of
# AC_PROG_CXX is so that a fatal error is emitted if this macro is called
# before AC_PROG_CXX, which would otherwise cause no C++ warnings to be
# checked.
AC_PROVIDE_IFELSE([AC_PROG_CXX],
[_AX_COMPILER_FLAGS_LANG([CXX])],
[m4_define([AC_PROG_CXX], defn([AC_PROG_CXX])[_AX_COMPILER_FLAGS_LANG([CXX])])])
AX_REQUIRE_DEFINED([AX_COMPILER_FLAGS_LDFLAGS])
# Default value for IS-RELEASE is $ax_is_release
ax_compiler_flags_is_release=m4_tolower(m4_normalize(ifelse([$3],,
[$ax_is_release],
[$3])))
AC_ARG_ENABLE([compile-warnings],
AS_HELP_STRING([--enable-compile-warnings=@<:@no/yes/error@:>@],
[Enable compiler warnings and errors]),,
[AS_IF([test "$ax_compiler_flags_is_release" = "yes"],
[enable_compile_warnings="yes"],
[enable_compile_warnings="error"])])
AC_ARG_ENABLE([Werror],
AS_HELP_STRING([--disable-Werror],
[Unconditionally make all compiler warnings non-fatal]),,
[enable_Werror=maybe])
# Return the user's chosen warning level
AS_IF([test "$enable_Werror" = "no" -a \
"$enable_compile_warnings" = "error"],[
enable_compile_warnings="yes"
])
ax_enable_compile_warnings=$enable_compile_warnings
AX_COMPILER_FLAGS_CFLAGS([$1],[$ax_compiler_flags_is_release],
[$4],[$5 $6 $7 $8])
m4_ifdef([_AX_COMPILER_FLAGS_LANG_CXX_enabled],
[AX_COMPILER_FLAGS_CXXFLAGS([WARN_CXXFLAGS],
[$ax_compiler_flags_is_release],
[$4],[$5 $6 $7 $8])])
AX_COMPILER_FLAGS_LDFLAGS([$2],[$ax_compiler_flags_is_release],
[$9],[$10 $11 $12 $13])
AX_COMPILER_FLAGS_GIR([WARN_SCANNERFLAGS],[$ax_compiler_flags_is_release])
])dnl AX_COMPILER_FLAGS
# =============================================================================
# https://www.gnu.org/software/autoconf-archive/ax_compiler_flags_cflags.html
# =============================================================================
#
# SYNOPSIS
#
# AX_COMPILER_FLAGS_CFLAGS([VARIABLE], [IS-RELEASE], [EXTRA-BASE-FLAGS], [EXTRA-YES-FLAGS])
#
# DESCRIPTION
#
# Add warning flags for the C compiler to VARIABLE, which defaults to
# WARN_CFLAGS. VARIABLE is AC_SUBST-ed by this macro, but must be
# manually added to the CFLAGS variable for each target in the code base.
#
# This macro depends on the environment set up by AX_COMPILER_FLAGS.
# Specifically, it uses the value of $ax_enable_compile_warnings to decide
# which flags to enable.
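#
# For illustration only, a hypothetical direct call made after
# AX_COMPILER_FLAGS has run (the variable name and extra flag are arbitrary):
#
#   AX_COMPILER_FLAGS_CFLAGS([TOOL_WARN_CFLAGS],[$ax_is_release],
#     [],[-Wswitch-enum])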
#
# LICENSE
#
# Copyright (c) 2014, 2015 Philip Withnall
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 14
AC_DEFUN([AX_COMPILER_FLAGS_CFLAGS],[
AC_REQUIRE([AC_PROG_SED])
AX_REQUIRE_DEFINED([AX_APPEND_COMPILE_FLAGS])
AX_REQUIRE_DEFINED([AX_APPEND_FLAG])
AX_REQUIRE_DEFINED([AX_CHECK_COMPILE_FLAG])
# Variable names
m4_define([ax_warn_cflags_variable],
[m4_normalize(ifelse([$1],,[WARN_CFLAGS],[$1]))])
AC_LANG_PUSH([C])
# Always pass -Werror=unknown-warning-option to get Clang to fail on bad
# flags, otherwise they are always appended to the warn_cflags variable, and
# Clang warns on them for every compilation unit.
# If this is passed to GCC, it will explode, so the flag must be enabled
# conditionally.
AX_CHECK_COMPILE_FLAG([-Werror=unknown-warning-option],[
ax_compiler_flags_test="-Werror=unknown-warning-option"
],[
ax_compiler_flags_test=""
])
# Check that -Wno-suggest-attribute=format is supported
AX_CHECK_COMPILE_FLAG([-Wno-suggest-attribute=format],[
ax_compiler_no_suggest_attribute_flags="-Wno-suggest-attribute=format"
],[
ax_compiler_no_suggest_attribute_flags=""
])
# Base flags
AX_APPEND_COMPILE_FLAGS([ dnl
-fno-strict-aliasing dnl
$3 dnl
],ax_warn_cflags_variable,[$ax_compiler_flags_test])
AS_IF([test "$ax_enable_compile_warnings" != "no"],[
# "yes" flags
AX_APPEND_COMPILE_FLAGS([ dnl
-Wall dnl
-Wextra dnl
-Wundef dnl
-Wnested-externs dnl
-Wwrite-strings dnl
-Wpointer-arith dnl
-Wmissing-declarations dnl
-Wmissing-prototypes dnl
-Wstrict-prototypes dnl
-Wredundant-decls dnl
-Wno-unused-parameter dnl
-Wno-missing-field-initializers dnl
-Wdeclaration-after-statement dnl
-Wformat=2 dnl
-Wold-style-definition dnl
-Wcast-align dnl
-Wformat-nonliteral dnl
-Wformat-security dnl
-Wsign-compare dnl
-Wstrict-aliasing dnl
-Wshadow dnl
-Winline dnl
-Wpacked dnl
-Wmissing-format-attribute dnl
-Wmissing-noreturn dnl
-Winit-self dnl
-Wredundant-decls dnl
-Wmissing-include-dirs dnl
-Wunused-but-set-variable dnl
-Warray-bounds dnl
-Wimplicit-function-declaration dnl
-Wreturn-type dnl
-Wswitch-enum dnl
-Wswitch-default dnl
$4 dnl
$5 dnl
$6 dnl
$7 dnl
],ax_warn_cflags_variable,[$ax_compiler_flags_test])
])
AS_IF([test "$ax_enable_compile_warnings" = "error"],[
# "error" flags; -Werror has to be appended unconditionally because
# it's not possible to test for
#
# suggest-attribute=format is disabled because it gives too many false
# positives
AX_APPEND_FLAG([-Werror],ax_warn_cflags_variable)
AX_APPEND_COMPILE_FLAGS([ dnl
[$ax_compiler_no_suggest_attribute_flags] dnl
],ax_warn_cflags_variable,[$ax_compiler_flags_test])
])
# In the flags below, when disabling specific flags, always add *both*
# -Wno-foo and -Wno-error=foo. This fixes the situation where (for example)
# we enable -Werror, disable a flag, and a build bot passes CFLAGS=-Wall,
# which effectively turns that flag back on again as an error.
for flag in $ax_warn_cflags_variable; do
AS_CASE([$flag],
[-Wno-*=*],[],
[-Wno-*],[
AX_APPEND_COMPILE_FLAGS([-Wno-error=$(AS_ECHO([$flag]) | $SED 's/^-Wno-//')],
ax_warn_cflags_variable,
[$ax_compiler_flags_test])
])
done
AC_LANG_POP([C])
# Substitute the variables
AC_SUBST(ax_warn_cflags_variable)
])dnl AX_COMPILER_FLAGS_CFLAGS
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_compiler_flags_gir.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_COMPILER_FLAGS_GIR([VARIABLE], [IS-RELEASE], [EXTRA-BASE-FLAGS], [EXTRA-YES-FLAGS])
#
# DESCRIPTION
#
# Add warning flags for the g-ir-scanner (from GObject Introspection) to
# VARIABLE, which defaults to WARN_SCANNERFLAGS. VARIABLE is AC_SUBST-ed
# by this macro, but must be manually added to the SCANNERFLAGS variable
# for each GIR target in the code base.
#
# This macro depends on the environment set up by AX_COMPILER_FLAGS.
# Specifically, it uses the value of $ax_enable_compile_warnings to decide
# which flags to enable.
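#
# For illustration only, a hypothetical direct call made after
# AX_COMPILER_FLAGS has run (the variable name is arbitrary); the resulting
# $(MY_SCANNERFLAGS) would then be added to SCANNERFLAGS in Makefile.am:
#
#   AX_COMPILER_FLAGS_GIR([MY_SCANNERFLAGS],[$ax_is_release])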
#
# LICENSE
#
# Copyright (c) 2015 Philip Withnall
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 6
AC_DEFUN([AX_COMPILER_FLAGS_GIR],[
AX_REQUIRE_DEFINED([AX_APPEND_FLAG])
# Variable names
m4_define([ax_warn_scannerflags_variable],
[m4_normalize(ifelse([$1],,[WARN_SCANNERFLAGS],[$1]))])
# Base flags
AX_APPEND_FLAG([$3],ax_warn_scannerflags_variable)
AS_IF([test "$ax_enable_compile_warnings" != "no"],[
# "yes" flags
AX_APPEND_FLAG([ dnl
--warn-all dnl
$4 dnl
$5 dnl
$6 dnl
$7 dnl
],ax_warn_scannerflags_variable)
])
AS_IF([test "$ax_enable_compile_warnings" = "error"],[
# "error" flags
AX_APPEND_FLAG([ dnl
--warn-error dnl
],ax_warn_scannerflags_variable)
])
# Substitute the variables
AC_SUBST(ax_warn_scannerflags_variable)
])dnl AX_COMPILER_FLAGS_GIR
# ==============================================================================
# https://www.gnu.org/software/autoconf-archive/ax_compiler_flags_ldflags.html
# ==============================================================================
#
# SYNOPSIS
#
# AX_COMPILER_FLAGS_LDFLAGS([VARIABLE], [IS-RELEASE], [EXTRA-BASE-FLAGS], [EXTRA-YES-FLAGS])
#
# DESCRIPTION
#
# Add warning flags for the linker to VARIABLE, which defaults to
# WARN_LDFLAGS. VARIABLE is AC_SUBST-ed by this macro, but must be
# manually added to the LDFLAGS variable for each target in the code base.
#
# This macro depends on the environment set up by AX_COMPILER_FLAGS.
# Specifically, it uses the value of $ax_enable_compile_warnings to decide
# which flags to enable.
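#
# For illustration only, a hypothetical direct call made after
# AX_COMPILER_FLAGS has run (the variable name and extra flag are arbitrary):
#
#   AX_COMPILER_FLAGS_LDFLAGS([TOOL_WARN_LDFLAGS],[$ax_is_release],
#     [],[-Wl,-z,defs])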
#
# LICENSE
#
# Copyright (c) 2014, 2015 Philip Withnall
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 8
AC_DEFUN([AX_COMPILER_FLAGS_LDFLAGS],[
AX_REQUIRE_DEFINED([AX_APPEND_LINK_FLAGS])
AX_REQUIRE_DEFINED([AX_APPEND_FLAG])
AX_REQUIRE_DEFINED([AX_CHECK_COMPILE_FLAG])
AX_REQUIRE_DEFINED([AX_CHECK_LINK_FLAG])
# Variable names
m4_define([ax_warn_ldflags_variable],
[m4_normalize(ifelse([$1],,[WARN_LDFLAGS],[$1]))])
# Always pass -Werror=unknown-warning-option to get Clang to fail on bad
# flags, otherwise they are always appended to the warn_ldflags variable,
# and Clang warns on them for every compilation unit.
# If this is passed to GCC, it will explode, so the flag must be enabled
# conditionally.
AX_CHECK_COMPILE_FLAG([-Werror=unknown-warning-option],[
ax_compiler_flags_test="-Werror=unknown-warning-option"
],[
ax_compiler_flags_test=""
])
# macOS linker does not have --as-needed
AX_CHECK_LINK_FLAG([-Wl,--no-as-needed], [
ax_compiler_flags_as_needed_option="-Wl,--no-as-needed"
], [
ax_compiler_flags_as_needed_option=""
])
# macOS linker speaks with a different accent
ax_compiler_flags_fatal_warnings_option=""
AX_CHECK_LINK_FLAG([-Wl,--fatal-warnings], [
ax_compiler_flags_fatal_warnings_option="-Wl,--fatal-warnings"
])
AX_CHECK_LINK_FLAG([-Wl,-fatal_warnings], [
ax_compiler_flags_fatal_warnings_option="-Wl,-fatal_warnings"
])
# Base flags
AX_APPEND_LINK_FLAGS([ dnl
$ax_compiler_flags_as_needed_option dnl
$3 dnl
],ax_warn_ldflags_variable,[$ax_compiler_flags_test])
AS_IF([test "$ax_enable_compile_warnings" != "no"],[
# "yes" flags
AX_APPEND_LINK_FLAGS([$4 $5 $6 $7],
ax_warn_ldflags_variable,
[$ax_compiler_flags_test])
])
AS_IF([test "$ax_enable_compile_warnings" = "error"],[
# "error" flags; -Werror has to be appended unconditionally because
# it's not possible to test for
#
# suggest-attribute=format is disabled because it gives too many false
# positives
AX_APPEND_LINK_FLAGS([ dnl
$ax_compiler_flags_fatal_warnings_option dnl
],ax_warn_ldflags_variable,[$ax_compiler_flags_test])
])
# Substitute the variables
AC_SUBST(ax_warn_ldflags_variable)
])dnl AX_COMPILER_FLAGS_LDFLAGS
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_is_release.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_IS_RELEASE(POLICY)
#
# DESCRIPTION
#
# Determine whether the code is being configured as a release, or from
# git. Set the ax_is_release variable to 'yes' or 'no'.
#
# If building a release version, it is recommended that the configure
# script disable compiler errors and debug features, by conditionalising
# them on the ax_is_release variable. If building from git, these
# features should be enabled.
#
# The POLICY parameter specifies how ax_is_release is determined. It can
# take the following values:
#
# * git-directory: ax_is_release will be 'no' if a '.git' directory exists
# * minor-version: ax_is_release will be 'no' if the minor version number
# in $PACKAGE_VERSION is odd; this assumes
# $PACKAGE_VERSION follows the 'major.minor.micro' scheme
# * micro-version: ax_is_release will be 'no' if the micro version number
# in $PACKAGE_VERSION is odd; this assumes
# $PACKAGE_VERSION follows the 'major.minor.micro' scheme
# * dash-version: ax_is_release will be 'no' if there is a dash '-'
# in $PACKAGE_VERSION, for example 1.2-pre3, 1.2.42-a8b9
# or 2.0-dirty (in particular this is suitable for use
# with git-version-gen)
# * always: ax_is_release will always be 'yes'
# * never: ax_is_release will always be 'no'
#
# Other policies may be added in future.
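#
# For illustration only (the chosen policy and the way $ax_is_release is
# consumed here are arbitrary examples):
#
#   AX_IS_RELEASE([git-directory])
#   AS_IF([test "$ax_is_release" = "no"],
#     [AC_MSG_NOTICE([building a development (non-release) version])])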
#
# LICENSE
#
# Copyright (c) 2015 Philip Withnall
# Copyright (c) 2016 Collabora Ltd.
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved.
#serial 7
AC_DEFUN([AX_IS_RELEASE],[
AC_BEFORE([AC_INIT],[$0])
m4_case([$1],
[git-directory],[
# $is_release = (.git directory does not exist)
AS_IF([test -d ${srcdir}/.git],[ax_is_release=no],[ax_is_release=yes])
],
[minor-version],[
# $is_release = ($minor_version is even)
minor_version=`echo "$PACKAGE_VERSION" | sed 's/[[^.]][[^.]]*.\([[^.]][[^.]]*\).*/\1/'`
AS_IF([test "$(( $minor_version % 2 ))" -ne 0],
[ax_is_release=no],[ax_is_release=yes])
],
[micro-version],[
# $is_release = ($micro_version is even)
micro_version=`echo "$PACKAGE_VERSION" | sed 's/[[^.]]*\.[[^.]]*\.\([[^.]]*\).*/\1/'`
AS_IF([test "$(( $micro_version % 2 ))" -ne 0],
[ax_is_release=no],[ax_is_release=yes])
],
[dash-version],[
# $is_release = ($PACKAGE_VERSION has a dash)
AS_CASE([$PACKAGE_VERSION],
[*-*], [ax_is_release=no],
[*], [ax_is_release=yes])
],
[always],[ax_is_release=yes],
[never],[ax_is_release=no],
[
AC_MSG_ERROR([Invalid policy. Valid policies: git-directory, minor-version, micro-version, dash-version, always, never.])
])
])
# ===========================================================================
# https://www.gnu.org/software/autoconf-archive/ax_require_defined.html
# ===========================================================================
#
# SYNOPSIS
#
# AX_REQUIRE_DEFINED(MACRO)
#
# DESCRIPTION
#
# AX_REQUIRE_DEFINED is a simple helper for making sure other macros have
# been defined and thus are available for use. This avoids random issues
# where a macro isn't expanded. Instead the configure script emits a
# non-fatal:
#
# ./configure: line 1673: AX_CFLAGS_WARN_ALL: command not found
#
# It's like AC_REQUIRE except it doesn't expand the required macro.
#
# Here's an example:
#
# AX_REQUIRE_DEFINED([AX_CHECK_LINK_FLAG])
#
# LICENSE
#
# Copyright (c) 2014 Mike Frysinger
#
# Copying and distribution of this file, with or without modification, are
# permitted in any medium without royalty provided the copyright notice
# and this notice are preserved. This file is offered as-is, without any
# warranty.
#serial 2
AC_DEFUN([AX_REQUIRE_DEFINED], [dnl
m4_ifndef([$1], [m4_fatal([macro ]$1[ is not defined; is a m4 file missing?])])
])dnl AX_REQUIRE_DEFINED
dnl pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
dnl serial 11 (pkg-config-0.29.1)
dnl
dnl Copyright © 2004 Scott James Remnant.
dnl Copyright © 2012-2015 Dan Nicholson
dnl
dnl This program is free software; you can redistribute it and/or modify
dnl it under the terms of the GNU General Public License as published by
dnl the Free Software Foundation; either version 2 of the License, or
dnl (at your option) any later version.
dnl
dnl This program is distributed in the hope that it will be useful, but
dnl WITHOUT ANY WARRANTY; without even the implied warranty of
dnl MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
dnl General Public License for more details.
dnl
dnl You should have received a copy of the GNU General Public License
dnl along with this program; if not, write to the Free Software
dnl Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
dnl 02111-1307, USA.
dnl
dnl As a special exception to the GNU General Public License, if you
dnl distribute this file as part of a program that contains a
dnl configuration script generated by Autoconf, you may include it under
dnl the same distribution terms that you use for the rest of that
dnl program.
dnl PKG_PREREQ(MIN-VERSION)
dnl -----------------------
dnl Since: 0.29
dnl
dnl Verify that the version of the pkg-config macros are at least
dnl MIN-VERSION. Unlike PKG_PROG_PKG_CONFIG, which checks the user's
dnl installed version of pkg-config, this checks the developer's version
dnl of pkg.m4 when generating configure.
dnl
dnl To ensure that this macro is defined, also add:
dnl m4_ifndef([PKG_PREREQ],
dnl [m4_fatal([must install pkg-config 0.29 or later before running autoconf/autogen])])
dnl
dnl See the "Since" comment for each macro you use to see what version
dnl of the macros you require.
m4_defun([PKG_PREREQ],
[m4_define([PKG_MACROS_VERSION], [0.29.1])
m4_if(m4_version_compare(PKG_MACROS_VERSION, [$1]), -1,
[m4_fatal([pkg.m4 version $1 or higher is required but ]PKG_MACROS_VERSION[ found])])
])dnl PKG_PREREQ
dnl PKG_PROG_PKG_CONFIG([MIN-VERSION])
dnl ----------------------------------
dnl Since: 0.16
dnl
dnl Search for the pkg-config tool and set the PKG_CONFIG variable to the
dnl first one found in the path. Checks that the version of pkg-config found
dnl is at least MIN-VERSION. If MIN-VERSION is not specified, 0.9.0 is
dnl used since that's the first version where most current features of
dnl pkg-config existed.
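dnl
dnl For illustration only (the minimum version and the error handling are
dnl arbitrary choices):
dnl
dnl   PKG_PROG_PKG_CONFIG([0.24])
dnl   AS_IF([test -z "$PKG_CONFIG"],
dnl     [AC_MSG_ERROR([pkg-config is required to configure this package])])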
AC_DEFUN([PKG_PROG_PKG_CONFIG],
[m4_pattern_forbid([^_?PKG_[A-Z_]+$])
m4_pattern_allow([^PKG_CONFIG(_(PATH|LIBDIR|SYSROOT_DIR|ALLOW_SYSTEM_(CFLAGS|LIBS)))?$])
m4_pattern_allow([^PKG_CONFIG_(DISABLE_UNINSTALLED|TOP_BUILD_DIR|DEBUG_SPEW)$])
AC_ARG_VAR([PKG_CONFIG], [path to pkg-config utility])
AC_ARG_VAR([PKG_CONFIG_PATH], [directories to add to pkg-config's search path])
AC_ARG_VAR([PKG_CONFIG_LIBDIR], [path overriding pkg-config's built-in search path])
if test "x$ac_cv_env_PKG_CONFIG_set" != "xset"; then
AC_PATH_TOOL([PKG_CONFIG], [pkg-config])
fi
if test -n "$PKG_CONFIG"; then
_pkg_min_version=m4_default([$1], [0.9.0])
AC_MSG_CHECKING([pkg-config is at least version $_pkg_min_version])
if $PKG_CONFIG --atleast-pkgconfig-version $_pkg_min_version; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
PKG_CONFIG=""
fi
fi[]dnl
])dnl PKG_PROG_PKG_CONFIG
dnl PKG_CHECK_EXISTS(MODULES, [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------------------------------
dnl Since: 0.18
dnl
dnl Check to see whether a particular set of modules exists. Similar to
dnl PKG_CHECK_MODULES(), but does not set variables or print errors.
dnl
dnl Please remember that m4 expands AC_REQUIRE([PKG_PROG_PKG_CONFIG])
dnl only at the first occurrence in configure.ac, so if the first place
dnl it is called might be skipped (for example, if it is inside an "if"
dnl block), you have to call PKG_PROG_PKG_CONFIG manually.
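dnl
dnl For illustration only (the module name, version and shell variable are
dnl hypothetical):
dnl
dnl   PKG_CHECK_EXISTS([foo >= 1.2],
dnl     [have_foo=yes],
dnl     [have_foo=no])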
AC_DEFUN([PKG_CHECK_EXISTS],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
if test -n "$PKG_CONFIG" && \
AC_RUN_LOG([$PKG_CONFIG --exists --print-errors "$1"]); then
m4_default([$2], [:])
m4_ifvaln([$3], [else
$3])dnl
fi])
dnl _PKG_CONFIG([VARIABLE], [COMMAND], [MODULES])
dnl ---------------------------------------------
dnl Internal wrapper calling pkg-config via PKG_CONFIG and setting
dnl pkg_failed based on the result.
m4_define([_PKG_CONFIG],
[if test -n "$$1"; then
pkg_cv_[]$1="$$1"
elif test -n "$PKG_CONFIG"; then
PKG_CHECK_EXISTS([$3],
[pkg_cv_[]$1=`$PKG_CONFIG --[]$2 "$3" 2>/dev/null`
test "x$?" != "x0" && pkg_failed=yes ],
[pkg_failed=yes])
else
pkg_failed=untried
fi[]dnl
])dnl _PKG_CONFIG
dnl _PKG_SHORT_ERRORS_SUPPORTED
dnl ---------------------------
dnl Internal check to see if pkg-config supports short errors.
AC_DEFUN([_PKG_SHORT_ERRORS_SUPPORTED],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])
if $PKG_CONFIG --atleast-pkgconfig-version 0.20; then
_pkg_short_errors_supported=yes
else
_pkg_short_errors_supported=no
fi[]dnl
])dnl _PKG_SHORT_ERRORS_SUPPORTED
dnl PKG_CHECK_MODULES(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl --------------------------------------------------------------
dnl Since: 0.4.0
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES might not happen, you should be sure to include an
dnl explicit call to PKG_PROG_PKG_CONFIG in your configure.ac
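dnl
dnl A minimal usage sketch, for illustration only (the FOO prefix, module
dnl name and version are hypothetical); on success FOO_CFLAGS and FOO_LIBS
dnl become available for use in Makefile.am:
dnl
dnl   PKG_PROG_PKG_CONFIG
dnl   PKG_CHECK_MODULES([FOO], [foo >= 1.0],
dnl     [AC_DEFINE([HAVE_FOO], [1], [Define if foo is available])],
dnl     [AC_MSG_ERROR([foo >= 1.0 not found])])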
AC_DEFUN([PKG_CHECK_MODULES],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1][_CFLAGS], [C compiler flags for $1, overriding pkg-config])dnl
AC_ARG_VAR([$1][_LIBS], [linker flags for $1, overriding pkg-config])dnl
pkg_failed=no
AC_MSG_CHECKING([for $1])
_PKG_CONFIG([$1][_CFLAGS], [cflags], [$2])
_PKG_CONFIG([$1][_LIBS], [libs], [$2])
m4_define([_PKG_TEXT], [Alternatively, you may set the environment variables $1[]_CFLAGS
and $1[]_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.])
if test $pkg_failed = yes; then
AC_MSG_RESULT([no])
_PKG_SHORT_ERRORS_SUPPORTED
if test $_pkg_short_errors_supported = yes; then
$1[]_PKG_ERRORS=`$PKG_CONFIG --short-errors --print-errors --cflags --libs "$2" 2>&1`
else
$1[]_PKG_ERRORS=`$PKG_CONFIG --print-errors --cflags --libs "$2" 2>&1`
fi
# Put the nasty error message in config.log where it belongs
echo "$$1[]_PKG_ERRORS" >&AS_MESSAGE_LOG_FD
m4_default([$4], [AC_MSG_ERROR(
[Package requirements ($2) were not met:
$$1_PKG_ERRORS
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
_PKG_TEXT])[]dnl
])
elif test $pkg_failed = untried; then
AC_MSG_RESULT([no])
m4_default([$4], [AC_MSG_FAILURE(
[The pkg-config script could not be found or is too old. Make sure it
is in your PATH or set the PKG_CONFIG environment variable to the full
path to pkg-config.
_PKG_TEXT
To get pkg-config, see <http://pkg-config.freedesktop.org/>.])[]dnl
])
else
$1[]_CFLAGS=$pkg_cv_[]$1[]_CFLAGS
$1[]_LIBS=$pkg_cv_[]$1[]_LIBS
AC_MSG_RESULT([yes])
$3
fi[]dnl
])dnl PKG_CHECK_MODULES
dnl PKG_CHECK_MODULES_STATIC(VARIABLE-PREFIX, MODULES, [ACTION-IF-FOUND],
dnl [ACTION-IF-NOT-FOUND])
dnl ---------------------------------------------------------------------
dnl Since: 0.29
dnl
dnl Checks for existence of MODULES and gathers its build flags with
dnl static libraries enabled. Sets VARIABLE-PREFIX_CFLAGS from --cflags
dnl and VARIABLE-PREFIX_LIBS from --libs.
dnl
dnl Note that if there is a possibility the first call to
dnl PKG_CHECK_MODULES_STATIC might not happen, you should be sure to
dnl include an explicit call to PKG_PROG_PKG_CONFIG in your
dnl configure.ac.
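dnl
dnl For illustration only (hypothetical prefix and module):
dnl
dnl   PKG_CHECK_MODULES_STATIC([FOO], [foo >= 1.0])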
AC_DEFUN([PKG_CHECK_MODULES_STATIC],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
_save_PKG_CONFIG=$PKG_CONFIG
PKG_CONFIG="$PKG_CONFIG --static"
PKG_CHECK_MODULES($@)
PKG_CONFIG=$_save_PKG_CONFIG[]dnl
])dnl PKG_CHECK_MODULES_STATIC
dnl PKG_INSTALLDIR([DIRECTORY])
dnl -------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable pkgconfigdir as the location where a module
dnl should install pkg-config .pc files. By default the directory is
dnl $libdir/pkgconfig, but the default can be changed by passing
dnl DIRECTORY. The user can override through the --with-pkgconfigdir
dnl parameter.
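dnl
dnl For illustration only (the .pc file name is hypothetical): call
dnl PKG_INSTALLDIR from configure.ac and then, in Makefile.am, install the
dnl file with:
dnl
dnl   pkgconfig_DATA = foo.pc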
AC_DEFUN([PKG_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${libdir}/pkgconfig'])])
m4_pushdef([pkg_description],
[pkg-config installation directory @<:@]pkg_default[@:>@])
AC_ARG_WITH([pkgconfigdir],
[AS_HELP_STRING([--with-pkgconfigdir], pkg_description)],,
[with_pkgconfigdir=]pkg_default)
AC_SUBST([pkgconfigdir], [$with_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_INSTALLDIR
dnl PKG_NOARCH_INSTALLDIR([DIRECTORY])
dnl --------------------------------
dnl Since: 0.27
dnl
dnl Substitutes the variable noarch_pkgconfigdir as the location where a
dnl module should install arch-independent pkg-config .pc files. By
dnl default the directory is $datadir/pkgconfig, but the default can be
dnl changed by passing DIRECTORY. The user can override through the
dnl --with-noarch-pkgconfigdir parameter.
AC_DEFUN([PKG_NOARCH_INSTALLDIR],
[m4_pushdef([pkg_default], [m4_default([$1], ['${datadir}/pkgconfig'])])
m4_pushdef([pkg_description],
[pkg-config arch-independent installation directory @<:@]pkg_default[@:>@])
AC_ARG_WITH([noarch-pkgconfigdir],
[AS_HELP_STRING([--with-noarch-pkgconfigdir], pkg_description)],,
[with_noarch_pkgconfigdir=]pkg_default)
AC_SUBST([noarch_pkgconfigdir], [$with_noarch_pkgconfigdir])
m4_popdef([pkg_default])
m4_popdef([pkg_description])
])dnl PKG_NOARCH_INSTALLDIR
dnl PKG_CHECK_VAR(VARIABLE, MODULE, CONFIG-VARIABLE,
dnl [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND])
dnl -------------------------------------------
dnl Since: 0.28
dnl
dnl Retrieves the value of the pkg-config variable for the given module.
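dnl
dnl For illustration only (the output variable, module and pkg-config
dnl variable names are hypothetical):
dnl
dnl   PKG_CHECK_VAR([FOO_DATADIR], [foo], [pkgdatadir],
dnl     [], [AC_MSG_ERROR([could not determine foo's pkgdatadir])])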
AC_DEFUN([PKG_CHECK_VAR],
[AC_REQUIRE([PKG_PROG_PKG_CONFIG])dnl
AC_ARG_VAR([$1], [value of $3 for $2, overriding pkg-config])dnl
_PKG_CONFIG([$1], [variable="][$3]["], [$2])
AS_VAR_COPY([$1], [pkg_cv_][$1])
AS_VAR_IF([$1], [""], [$5], [$4])dnl
])dnl PKG_CHECK_VAR
# Copyright (C) 2002-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_AUTOMAKE_VERSION(VERSION)
# ----------------------------
# Automake X.Y traces this macro to ensure aclocal.m4 has been
# generated from the m4 files accompanying Automake X.Y.
# (This private macro should not be called outside this file.)
AC_DEFUN([AM_AUTOMAKE_VERSION],
[am__api_version='1.15'
dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to
dnl require some minimum version. Point them to the right macro.
m4_if([$1], [1.15.1], [],
[AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl
])
# _AM_AUTOCONF_VERSION(VERSION)
# -----------------------------
# aclocal traces this macro to find the Autoconf version.
# This is a private macro too. Using m4_define simplifies
# the logic in aclocal, which can simply ignore this definition.
m4_define([_AM_AUTOCONF_VERSION], [])
# AM_SET_CURRENT_AUTOMAKE_VERSION
# -------------------------------
# Call AM_AUTOMAKE_VERSION and _AM_AUTOCONF_VERSION so they can be traced.
# This function is AC_REQUIREd by AM_INIT_AUTOMAKE.
AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION],
[AM_AUTOMAKE_VERSION([1.15.1])dnl
m4_ifndef([AC_AUTOCONF_VERSION],
[m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl
_AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))])
# AM_AUX_DIR_EXPAND -*- Autoconf -*-
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets
# $ac_aux_dir to '$srcdir/foo'. In other projects, it is set to
# '$srcdir', '$srcdir/..', or '$srcdir/../..'.
#
# Of course, Automake must honor this variable whenever it calls a
# tool from the auxiliary directory. The problem is that $srcdir (and
# therefore $ac_aux_dir as well) can be either absolute or relative,
# depending on how configure is run. This is pretty annoying, since
# it makes $ac_aux_dir quite unusable in subdirectories: in the top
# source directory, any form will work fine, but in subdirectories a
# relative path needs to be adjusted first.
#
# $ac_aux_dir/missing
# fails when called from a subdirectory if $ac_aux_dir is relative
# $top_srcdir/$ac_aux_dir/missing
# fails if $ac_aux_dir is absolute,
# fails when called from a subdirectory in a VPATH build with
# a relative $ac_aux_dir
#
# The reason for the latter failure is that $top_srcdir and $ac_aux_dir
# are both prefixed by $srcdir. In an in-source build this is usually
# harmless because $srcdir is '.', but things will break when you
# start a VPATH build or use an absolute $srcdir.
#
# So we could use something similar to $top_srcdir/$ac_aux_dir/missing,
# iff we strip the leading $srcdir from $ac_aux_dir. That would be:
# am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"`
# and then we would define $MISSING as
# MISSING="\${SHELL} $am_aux_dir/missing"
# This will work as long as MISSING is not called from configure, because
# unfortunately $(top_srcdir) has no meaning in configure.
# However there are other variables, like CC, which are often used in
# configure, and could therefore not use this "fixed" $ac_aux_dir.
#
# Another solution, used here, is to always expand $ac_aux_dir to an
# absolute PATH. The drawback is that using absolute paths prevent a
# configured tree to be moved without reconfiguration.
AC_DEFUN([AM_AUX_DIR_EXPAND],
[AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl
# Expand $ac_aux_dir to an absolute path.
am_aux_dir=`cd "$ac_aux_dir" && pwd`
])
# AM_CONDITIONAL -*- Autoconf -*-
# Copyright (C) 1997-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_CONDITIONAL(NAME, SHELL-CONDITION)
# -------------------------------------
# Define a conditional.
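#
# For illustration only (the conditional name and the shell test are
# hypothetical):
#
#   AM_CONDITIONAL([ENABLE_DOCS], [test "x$enable_docs" = xyes])
#
# Makefile.am can then guard rules with "if ENABLE_DOCS ... endif".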
AC_DEFUN([AM_CONDITIONAL],
[AC_PREREQ([2.52])dnl
m4_if([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])],
[$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl
AC_SUBST([$1_TRUE])dnl
AC_SUBST([$1_FALSE])dnl
_AM_SUBST_NOTMAKE([$1_TRUE])dnl
_AM_SUBST_NOTMAKE([$1_FALSE])dnl
m4_define([_AM_COND_VALUE_$1], [$2])dnl
if $2; then
$1_TRUE=
$1_FALSE='#'
else
$1_TRUE='#'
$1_FALSE=
fi
AC_CONFIG_COMMANDS_PRE(
[if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then
AC_MSG_ERROR([[conditional "$1" was never defined.
Usually this means the macro was only invoked conditionally.]])
fi])])
# Copyright (C) 1999-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# There are a few dirty hacks below to avoid letting 'AC_PROG_CC' be
# written in clear, in which case automake, when reading aclocal.m4,
# will think it sees a *use*, and therefore will trigger all its
# C support machinery. Also note that it means that autoscan, seeing
# CC etc. in the Makefile, will ask for an AC_PROG_CC use...
# _AM_DEPENDENCIES(NAME)
# ----------------------
# See how the compiler implements dependency checking.
# NAME is "CC", "CXX", "OBJC", "OBJCXX", "UPC", or "GJC".
# We try a few techniques and use that to set a single cache variable.
#
# We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was
# modified to invoke _AM_DEPENDENCIES(CC); we would have a circular
# dependency, and given that the user is not expected to run this macro,
# just rely on AC_PROG_CC.
AC_DEFUN([_AM_DEPENDENCIES],
[AC_REQUIRE([AM_SET_DEPDIR])dnl
AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl
AC_REQUIRE([AM_MAKE_INCLUDE])dnl
AC_REQUIRE([AM_DEP_TRACK])dnl
m4_if([$1], [CC], [depcc="$CC" am_compiler_list=],
[$1], [CXX], [depcc="$CXX" am_compiler_list=],
[$1], [OBJC], [depcc="$OBJC" am_compiler_list='gcc3 gcc'],
[$1], [OBJCXX], [depcc="$OBJCXX" am_compiler_list='gcc3 gcc'],
[$1], [UPC], [depcc="$UPC" am_compiler_list=],
[$1], [GCJ], [depcc="$GCJ" am_compiler_list='gcc3 gcc'],
[depcc="$$1" am_compiler_list=])
AC_CACHE_CHECK([dependency style of $depcc],
[am_cv_$1_dependencies_compiler_type],
[if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then
# We make a subdir and do the tests there. Otherwise we can end up
# making bogus files that we don't know about and never remove. For
# instance it was reported that on HP-UX the gcc test will end up
# making a dummy file named 'D' -- because '-MD' means "put the output
# in D".
rm -rf conftest.dir
mkdir conftest.dir
# Copy depcomp to subdir because otherwise we won't find it if we're
# using a relative directory.
cp "$am_depcomp" conftest.dir
cd conftest.dir
# We will build objects and dependencies in a subdirectory because
# it helps to detect inapplicable dependency modes. For instance
# both Tru64's cc and ICC support -MD to output dependencies as a
# side effect of compilation, but ICC will put the dependencies in
# the current directory while Tru64 will put them in the object
# directory.
mkdir sub
am_cv_$1_dependencies_compiler_type=none
if test "$am_compiler_list" = ""; then
am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp`
fi
am__universal=false
m4_case([$1], [CC],
[case " $depcc " in #(
*\ -arch\ *\ -arch\ *) am__universal=true ;;
esac],
[CXX],
[case " $depcc " in #(
*\ -arch\ *\ -arch\ *) am__universal=true ;;
esac])
for depmode in $am_compiler_list; do
# Setup a source with many dependencies, because some compilers
# like to wrap large dependency lists on column 80 (with \), and
# we should not choose a depcomp mode which is confused by this.
#
# We need to recreate these files for each test, as the compiler may
# overwrite some of them when testing with obscure command lines.
# This happens at least with the AIX C compiler.
: > sub/conftest.c
for i in 1 2 3 4 5 6; do
echo '#include "conftst'$i'.h"' >> sub/conftest.c
# Using ": > sub/conftst$i.h" creates only sub/conftst1.h with
# Solaris 10 /bin/sh.
echo '/* dummy */' > sub/conftst$i.h
done
echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf
# We check with '-c' and '-o' for the sake of the "dashmstdout"
# mode. It turns out that the SunPro C++ compiler does not properly
# handle '-M -o', and we need to detect this. Also, some Intel
# versions had trouble with output in subdirs.
am__obj=sub/conftest.${OBJEXT-o}
am__minus_obj="-o $am__obj"
case $depmode in
gcc)
# This depmode causes a compiler race in universal mode.
test "$am__universal" = false || continue
;;
nosideeffect)
# After this tag, mechanisms are not by side-effect, so they'll
# only be used when explicitly requested.
if test "x$enable_dependency_tracking" = xyes; then
continue
else
break
fi
;;
msvc7 | msvc7msys | msvisualcpp | msvcmsys)
# This compiler won't grok '-c -o', but also, the minuso test has
# not run yet. These depmodes are late enough in the game, and
# so weak that their functioning should not be impacted.
am__obj=conftest.${OBJEXT-o}
am__minus_obj=
;;
none) break ;;
esac
if depmode=$depmode \
source=sub/conftest.c object=$am__obj \
depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \
$SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \
>/dev/null 2>conftest.err &&
grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 &&
grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 &&
grep $am__obj sub/conftest.Po > /dev/null 2>&1 &&
${MAKE-make} -s -f confmf > /dev/null 2>&1; then
# icc doesn't choke on unknown options, it will just issue warnings
# or remarks (even with -Werror). So we grep stderr for any message
# that says an option was ignored or not supported.
# When given -MP, icc 7.0 and 7.1 complain thusly:
# icc: Command line warning: ignoring option '-M'; no argument required
# The diagnosis changed in icc 8.0:
# icc: Command line remark: option '-MP' not supported
if (grep 'ignoring option' conftest.err ||
grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else
am_cv_$1_dependencies_compiler_type=$depmode
break
fi
fi
done
cd ..
rm -rf conftest.dir
else
am_cv_$1_dependencies_compiler_type=none
fi
])
AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type])
AM_CONDITIONAL([am__fastdep$1], [
test "x$enable_dependency_tracking" != xno \
&& test "$am_cv_$1_dependencies_compiler_type" = gcc3])
])
# AM_SET_DEPDIR
# -------------
# Choose a directory name for dependency files.
# This macro is AC_REQUIREd in _AM_DEPENDENCIES.
AC_DEFUN([AM_SET_DEPDIR],
[AC_REQUIRE([AM_SET_LEADING_DOT])dnl
AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl
])
# AM_DEP_TRACK
# ------------
AC_DEFUN([AM_DEP_TRACK],
[AC_ARG_ENABLE([dependency-tracking], [dnl
AS_HELP_STRING(
[--enable-dependency-tracking],
[do not reject slow dependency extractors])
AS_HELP_STRING(
[--disable-dependency-tracking],
[speeds up one-time build])])
if test "x$enable_dependency_tracking" != xno; then
am_depcomp="$ac_aux_dir/depcomp"
AMDEPBACKSLASH='\'
am__nodep='_no'
fi
AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno])
AC_SUBST([AMDEPBACKSLASH])dnl
_AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl
AC_SUBST([am__nodep])dnl
_AM_SUBST_NOTMAKE([am__nodep])dnl
])
# Generate code to set up dependency tracking. -*- Autoconf -*-
# Copyright (C) 1999-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_OUTPUT_DEPENDENCY_COMMANDS
# ------------------------------
AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS],
[{
# Older Autoconf quotes --file arguments for eval, but not when files
# are listed without --file. Let's play safe and only enable the eval
# if we detect the quoting.
case $CONFIG_FILES in
*\'*) eval set x "$CONFIG_FILES" ;;
*) set x $CONFIG_FILES ;;
esac
shift
for mf
do
# Strip MF so we end up with the name of the file.
mf=`echo "$mf" | sed -e 's/:.*$//'`
# Check whether this is an Automake generated Makefile or not.
# We used to match only the files named 'Makefile.in', but
# some people rename them; so instead we look at the file content.
# Grep'ing the first line is not enough: some people post-process
# each Makefile.in and add a new line on top of each file to say so.
# Grep'ing the whole file is not good either: AIX grep has a line
# limit of 2048, but all seds we know of understand at least 4000.
if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then
dirpart=`AS_DIRNAME("$mf")`
else
continue
fi
# Extract the definition of DEPDIR, am__include, and am__quote
# from the Makefile without running 'make'.
DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"`
test -z "$DEPDIR" && continue
am__include=`sed -n 's/^am__include = //p' < "$mf"`
test -z "$am__include" && continue
am__quote=`sed -n 's/^am__quote = //p' < "$mf"`
# Find all dependency output files, they are included files with
# $(DEPDIR) in their names. We invoke sed twice because it is the
# simplest approach to changing $(DEPDIR) to its actual value in the
# expansion.
for file in `sed -n "
s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \
sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g'`; do
# Make sure the directory exists.
test -f "$dirpart/$file" && continue
fdir=`AS_DIRNAME(["$file"])`
AS_MKDIR_P([$dirpart/$fdir])
# echo "creating $dirpart/$file"
echo '# dummy' > "$dirpart/$file"
done
done
}
])# _AM_OUTPUT_DEPENDENCY_COMMANDS
# AM_OUTPUT_DEPENDENCY_COMMANDS
# -----------------------------
# This macro should only be invoked once -- use via AC_REQUIRE.
#
# This code is only required when automatic dependency tracking
# is enabled. FIXME. This creates each '.P' file that we will
# need in order to bootstrap the dependency handling code.
AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS],
[AC_CONFIG_COMMANDS([depfiles],
[test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS],
[AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"])
])
# Do all the work for Automake. -*- Autoconf -*-
# Copyright (C) 1996-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This macro actually does too much. Some checks are only needed if
# your package does certain things. But this isn't really a big deal.
dnl Redefine AC_PROG_CC to automatically invoke _AM_PROG_CC_C_O.
m4_define([AC_PROG_CC],
m4_defn([AC_PROG_CC])
[_AM_PROG_CC_C_O
])
# AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE])
# AM_INIT_AUTOMAKE([OPTIONS])
# -----------------------------------------------
# The call with PACKAGE and VERSION arguments is the old style
# call (pre autoconf-2.50), which is being phased out. PACKAGE
# and VERSION should now be passed to AC_INIT and removed from
# the call to AM_INIT_AUTOMAKE.
# We support both call styles for the transition. After
# the next Automake release, Autoconf can make the AC_INIT
# arguments mandatory, and then we can depend on a new Autoconf
# release and drop the old call support.
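#
# For illustration only (the package name, version, bug address and option
# list are hypothetical), the new-style calls look like:
#
#   AC_INIT([foo], [1.0], [bug-foo@example.org])
#   AM_INIT_AUTOMAKE([foreign subdir-objects])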
AC_DEFUN([AM_INIT_AUTOMAKE],
[AC_PREREQ([2.65])dnl
dnl Autoconf wants to disallow AM_ names. We explicitly allow
dnl the ones we care about.
m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl
AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl
AC_REQUIRE([AC_PROG_INSTALL])dnl
if test "`cd $srcdir && pwd`" != "`pwd`"; then
# Use -I$(srcdir) only when $(srcdir) != ., so that make's output
# is not polluted with repeated "-I."
AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl
# test to see if srcdir already configured
if test -f $srcdir/config.status; then
AC_MSG_ERROR([source directory already configured; run "make distclean" there first])
fi
fi
# test whether we have cygpath
if test -z "$CYGPATH_W"; then
if (cygpath --version) >/dev/null 2>/dev/null; then
CYGPATH_W='cygpath -w'
else
CYGPATH_W=echo
fi
fi
AC_SUBST([CYGPATH_W])
# Define the identity of the package.
dnl Distinguish between old-style and new-style calls.
m4_ifval([$2],
[AC_DIAGNOSE([obsolete],
[$0: two- and three-arguments forms are deprecated.])
m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl
AC_SUBST([PACKAGE], [$1])dnl
AC_SUBST([VERSION], [$2])],
[_AM_SET_OPTIONS([$1])dnl
dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT.
m4_if(
m4_ifdef([AC_PACKAGE_NAME], [ok]):m4_ifdef([AC_PACKAGE_VERSION], [ok]),
[ok:ok],,
[m4_fatal([AC_INIT should be called with package and version arguments])])dnl
AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl
AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl
_AM_IF_OPTION([no-define],,
[AC_DEFINE_UNQUOTED([PACKAGE], ["$PACKAGE"], [Name of package])
AC_DEFINE_UNQUOTED([VERSION], ["$VERSION"], [Version number of package])])dnl
# Some tools Automake needs.
AC_REQUIRE([AM_SANITY_CHECK])dnl
AC_REQUIRE([AC_ARG_PROGRAM])dnl
AM_MISSING_PROG([ACLOCAL], [aclocal-${am__api_version}])
AM_MISSING_PROG([AUTOCONF], [autoconf])
AM_MISSING_PROG([AUTOMAKE], [automake-${am__api_version}])
AM_MISSING_PROG([AUTOHEADER], [autoheader])
AM_MISSING_PROG([MAKEINFO], [makeinfo])
AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl
AC_REQUIRE([AC_PROG_MKDIR_P])dnl
# For better backward compatibility. To be removed once Automake 1.9.x
# dies out for good. For more background, see:
# <https://lists.gnu.org/archive/html/automake/2012-07/msg00001.html>
# <https://lists.gnu.org/archive/html/automake/2012-07/msg00014.html>
AC_SUBST([mkdir_p], ['$(MKDIR_P)'])
# We need awk for the "check" target (and possibly the TAP driver). The
# system "awk" is bad on some platforms.
AC_REQUIRE([AC_PROG_AWK])dnl
AC_REQUIRE([AC_PROG_MAKE_SET])dnl
AC_REQUIRE([AM_SET_LEADING_DOT])dnl
_AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])],
[_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])],
[_AM_PROG_TAR([v7])])])
_AM_IF_OPTION([no-dependencies],,
[AC_PROVIDE_IFELSE([AC_PROG_CC],
[_AM_DEPENDENCIES([CC])],
[m4_define([AC_PROG_CC],
m4_defn([AC_PROG_CC])[_AM_DEPENDENCIES([CC])])])dnl
AC_PROVIDE_IFELSE([AC_PROG_CXX],
[_AM_DEPENDENCIES([CXX])],
[m4_define([AC_PROG_CXX],
m4_defn([AC_PROG_CXX])[_AM_DEPENDENCIES([CXX])])])dnl
AC_PROVIDE_IFELSE([AC_PROG_OBJC],
[_AM_DEPENDENCIES([OBJC])],
[m4_define([AC_PROG_OBJC],
m4_defn([AC_PROG_OBJC])[_AM_DEPENDENCIES([OBJC])])])dnl
AC_PROVIDE_IFELSE([AC_PROG_OBJCXX],
[_AM_DEPENDENCIES([OBJCXX])],
[m4_define([AC_PROG_OBJCXX],
m4_defn([AC_PROG_OBJCXX])[_AM_DEPENDENCIES([OBJCXX])])])dnl
])
AC_REQUIRE([AM_SILENT_RULES])dnl
dnl The testsuite driver may need to know about EXEEXT, so add the
dnl 'am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This
dnl macro is hooked onto _AC_COMPILER_EXEEXT early, see below.
AC_CONFIG_COMMANDS_PRE(dnl
[m4_provide_if([_AM_COMPILER_EXEEXT],
[AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl
# POSIX will say in a future version that running "rm -f" with no argument
# is OK; and we want to be able to make that assumption in our Makefile
# recipes. So use an aggressive probe to check that the usage we want is
# actually supported "in the wild" to an acceptable degree.
# See automake bug#10828.
# To make any issue more visible, cause the running configure to be aborted
# by default if the 'rm' program in use doesn't match our expectations; the
# user can still override this though.
if rm -f && rm -fr && rm -rf; then : OK; else
cat >&2 <<'END'
Oops!
Your 'rm' program seems unable to run without file operands specified
on the command line, even when the '-f' option is present. This is contrary
to the behaviour of most rm programs out there, and not conforming with
the upcoming POSIX standard: <http://austingroupbugs.net/view.php?id=542>
Please tell bug-automake@gnu.org about your system, including the value
of your $PATH and any error possibly output before this message. This
can help us improve future automake versions.
END
if test x"$ACCEPT_INFERIOR_RM_PROGRAM" = x"yes"; then
echo 'Configuration will proceed anyway, since you have set the' >&2
echo 'ACCEPT_INFERIOR_RM_PROGRAM variable to "yes"' >&2
echo >&2
else
cat >&2 <<'END'
Aborting the configuration process, to ensure you take notice of the issue.
You can download and install GNU coreutils to get an 'rm' implementation
that behaves properly: <https://www.gnu.org/software/coreutils/>.
If you want to complete the configuration process using your problematic
'rm' anyway, export the environment variable ACCEPT_INFERIOR_RM_PROGRAM
to "yes", and re-run configure.
END
AC_MSG_ERROR([Your 'rm' program is bad, sorry.])
fi
fi
dnl The trailing newline in this macro's definition is deliberate, for
dnl backward compatibility and to allow trailing 'dnl'-style comments
dnl after the AM_INIT_AUTOMAKE invocation. See automake bug#16841.
])
dnl Hook into '_AC_COMPILER_EXEEXT' early to learn its expansion. Do not
dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further
dnl mangled by Autoconf and run in a shell conditional statement.
m4_define([_AC_COMPILER_EXEEXT],
m4_defn([_AC_COMPILER_EXEEXT])[m4_provide([_AM_COMPILER_EXEEXT])])
# When config.status generates a header, we must update the stamp-h file.
# This file resides in the same directory as the config header
# that is generated. The stamp files are numbered to have different names.
# Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the
# loop where config.status creates the headers, so we can generate
# our stamp files there.
AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK],
[# Compute $1's index in $config_headers.
_am_arg=$1
_am_stamp_count=1
for _am_header in $config_headers :; do
case $_am_header in
$_am_arg | $_am_arg:* )
break ;;
* )
_am_stamp_count=`expr $_am_stamp_count + 1` ;;
esac
done
echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count])
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_INSTALL_SH
# ------------------
# Define $install_sh.
AC_DEFUN([AM_PROG_INSTALL_SH],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
if test x"${install_sh+set}" != xset; then
case $am_aux_dir in
*\ * | *\ *)
install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;;
*)
install_sh="\${SHELL} $am_aux_dir/install-sh"
esac
fi
AC_SUBST([install_sh])])
# Copyright (C) 2003-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# Check whether the underlying file-system supports filenames
# with a leading dot. For instance MS-DOS doesn't.
AC_DEFUN([AM_SET_LEADING_DOT],
[rm -rf .tst 2>/dev/null
mkdir .tst 2>/dev/null
if test -d .tst; then
am__leading_dot=.
else
am__leading_dot=_
fi
rmdir .tst 2>/dev/null
AC_SUBST([am__leading_dot])])
# Check to see how 'make' treats includes. -*- Autoconf -*-
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_MAKE_INCLUDE()
# -----------------
# Check to see how make treats includes.
AC_DEFUN([AM_MAKE_INCLUDE],
[am_make=${MAKE-make}
cat > confinc << 'END'
am__doit:
@echo this is the am__doit target
.PHONY: am__doit
END
# If we don't find an include directive, just comment out the code.
AC_MSG_CHECKING([for style of include used by $am_make])
am__include="#"
am__quote=
_am_result=none
# First try GNU make style include.
echo "include confinc" > confmf
# Ignore all kinds of additional output from 'make'.
case `$am_make -s -f confmf 2> /dev/null` in #(
*the\ am__doit\ target*)
am__include=include
am__quote=
_am_result=GNU
;;
esac
# Now try BSD make style include.
if test "$am__include" = "#"; then
echo '.include "confinc"' > confmf
case `$am_make -s -f confmf 2> /dev/null` in #(
*the\ am__doit\ target*)
am__include=.include
am__quote="\""
_am_result=BSD
;;
esac
fi
AC_SUBST([am__include])
AC_SUBST([am__quote])
AC_MSG_RESULT([$_am_result])
rm -f confinc confmf
])
# Fake the existence of programs that GNU maintainers use. -*- Autoconf -*-
# Copyright (C) 1997-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_MISSING_PROG(NAME, PROGRAM)
# ------------------------------
AC_DEFUN([AM_MISSING_PROG],
[AC_REQUIRE([AM_MISSING_HAS_RUN])
$1=${$1-"${am_missing_run}$2"}
AC_SUBST($1)])
# AM_MISSING_HAS_RUN
# ------------------
# Define MISSING if not defined so far and test if it is modern enough.
# If it is, set am_missing_run to use it, otherwise, to nothing.
AC_DEFUN([AM_MISSING_HAS_RUN],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
AC_REQUIRE_AUX_FILE([missing])dnl
if test x"${MISSING+set}" != xset; then
case $am_aux_dir in
*\ * | *\ *)
MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;;
*)
MISSING="\${SHELL} $am_aux_dir/missing" ;;
esac
fi
# Use eval to expand $SHELL
if eval "$MISSING --is-lightweight"; then
am_missing_run="$MISSING "
else
am_missing_run=
AC_MSG_WARN(['missing' script is too old or missing])
fi
])
# Helper functions for option handling. -*- Autoconf -*-
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_MANGLE_OPTION(NAME)
# -----------------------
AC_DEFUN([_AM_MANGLE_OPTION],
[[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])])
# _AM_SET_OPTION(NAME)
# --------------------
# Set option NAME. Presently that only means defining a flag for this option.
AC_DEFUN([_AM_SET_OPTION],
[m4_define(_AM_MANGLE_OPTION([$1]), [1])])
# _AM_SET_OPTIONS(OPTIONS)
# ------------------------
# OPTIONS is a space-separated list of Automake options.
AC_DEFUN([_AM_SET_OPTIONS],
[m4_foreach_w([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])])
# _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET])
# -------------------------------------------
# Execute IF-SET if OPTION is set, IF-NOT-SET otherwise.
AC_DEFUN([_AM_IF_OPTION],
[m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])])
# Copyright (C) 1999-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_PROG_CC_C_O
# ---------------
# Like AC_PROG_CC_C_O, but changed for automake. We rewrite AC_PROG_CC
# to automatically call this.
AC_DEFUN([_AM_PROG_CC_C_O],
[AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl
AC_REQUIRE_AUX_FILE([compile])dnl
AC_LANG_PUSH([C])dnl
AC_CACHE_CHECK(
[whether $CC understands -c and -o together],
[am_cv_prog_cc_c_o],
[AC_LANG_CONFTEST([AC_LANG_PROGRAM([])])
# Make sure it works both with $CC and with simple cc.
# Following AC_PROG_CC_C_O, we do the test twice because some
# compilers refuse to overwrite an existing .o file with -o,
# though they will create one.
am_cv_prog_cc_c_o=yes
for am_i in 1 2; do
if AM_RUN_LOG([$CC -c conftest.$ac_ext -o conftest2.$ac_objext]) \
&& test -f conftest2.$ac_objext; then
: OK
else
am_cv_prog_cc_c_o=no
break
fi
done
rm -f core conftest*
unset am_i])
if test "$am_cv_prog_cc_c_o" != yes; then
# Losing compiler, so override with the script.
# FIXME: It is wrong to rewrite CC.
# But if we don't then we get into trouble of one sort or another.
# A longer-term fix would be to have automake use am__CC in this case,
# and then we could set am__CC="\$(top_srcdir)/compile \$(CC)"
CC="$am_aux_dir/compile $CC"
fi
AC_LANG_POP([C])])
# For backward compatibility.
AC_DEFUN_ONCE([AM_PROG_CC_C_O], [AC_REQUIRE([AC_PROG_CC])])
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_RUN_LOG(COMMAND)
# -------------------
# Run COMMAND, save the exit status in ac_status, and log it.
# (This has been adapted from Autoconf's _AC_RUN_LOG macro.)
AC_DEFUN([AM_RUN_LOG],
[{ echo "$as_me:$LINENO: $1" >&AS_MESSAGE_LOG_FD
($1) >&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD
ac_status=$?
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
(exit $ac_status); }])
# Check to make sure that the build environment is sane. -*- Autoconf -*-
# Copyright (C) 1996-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_SANITY_CHECK
# ---------------
AC_DEFUN([AM_SANITY_CHECK],
[AC_MSG_CHECKING([whether build environment is sane])
# Reject unsafe characters in $srcdir or the absolute working directory
# name. Accept space and tab only in the latter.
am_lf='
'
case `pwd` in
*[[\\\"\#\$\&\'\`$am_lf]]*)
AC_MSG_ERROR([unsafe absolute working directory name]);;
esac
case $srcdir in
*[[\\\"\#\$\&\'\`$am_lf\ \ ]]*)
AC_MSG_ERROR([unsafe srcdir value: '$srcdir']);;
esac
# Do 'set' in a subshell so we don't clobber the current shell's
# arguments. Must try -L first in case configure is actually a
# symlink; some systems play weird games with the mod time of symlinks
# (eg FreeBSD returns the mod time of the symlink's containing
# directory).
if (
am_has_slept=no
for am_try in 1 2; do
echo "timestamp, slept: $am_has_slept" > conftest.file
set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null`
if test "$[*]" = "X"; then
# -L didn't work.
set X `ls -t "$srcdir/configure" conftest.file`
fi
if test "$[*]" != "X $srcdir/configure conftest.file" \
&& test "$[*]" != "X conftest.file $srcdir/configure"; then
# If neither matched, then we have a broken ls. This can happen
# if, for instance, CONFIG_SHELL is bash and it inherits a
# broken ls alias from the environment. This has actually
# happened. Such a system could not be considered "sane".
AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken
alias in your environment])
fi
if test "$[2]" = conftest.file || test $am_try -eq 2; then
break
fi
# Just in case.
sleep 1
am_has_slept=yes
done
test "$[2]" = conftest.file
)
then
# Ok.
:
else
AC_MSG_ERROR([newly created file is older than distributed files!
Check your system clock])
fi
AC_MSG_RESULT([yes])
# If we didn't sleep, we still need to ensure time stamps of config.status and
# generated files are strictly newer.
am_sleep_pid=
if grep 'slept: no' conftest.file >/dev/null 2>&1; then
( sleep 1 ) &
am_sleep_pid=$!
fi
AC_CONFIG_COMMANDS_PRE(
[AC_MSG_CHECKING([that generated files are newer than configure])
if test -n "$am_sleep_pid"; then
# Hide warnings about reused PIDs.
wait $am_sleep_pid 2>/dev/null
fi
AC_MSG_RESULT([done])])
rm -f conftest.file
])
# Copyright (C) 2009-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_SILENT_RULES([DEFAULT])
# --------------------------
# Enable less verbose build rules; with the default set to DEFAULT
# ("yes" being less verbose, "no" or empty being verbose).
AC_DEFUN([AM_SILENT_RULES],
[AC_ARG_ENABLE([silent-rules], [dnl
AS_HELP_STRING(
[--enable-silent-rules],
[less verbose build output (undo: "make V=1")])
AS_HELP_STRING(
[--disable-silent-rules],
[verbose build output (undo: "make V=0")])dnl
])
case $enable_silent_rules in @%:@ (((
yes) AM_DEFAULT_VERBOSITY=0;;
no) AM_DEFAULT_VERBOSITY=1;;
*) AM_DEFAULT_VERBOSITY=m4_if([$1], [yes], [0], [1]);;
esac
dnl
dnl A few 'make' implementations (e.g., NonStop OS and NextStep)
dnl do not support nested variable expansions.
dnl See automake bug#9928 and bug#10237.
am_make=${MAKE-make}
AC_CACHE_CHECK([whether $am_make supports nested variables],
[am_cv_make_support_nested_variables],
[if AS_ECHO([['TRUE=$(BAR$(V))
BAR0=false
BAR1=true
V=1
am__doit:
@$(TRUE)
.PHONY: am__doit']]) | $am_make -f - >/dev/null 2>&1; then
am_cv_make_support_nested_variables=yes
else
am_cv_make_support_nested_variables=no
fi])
if test $am_cv_make_support_nested_variables = yes; then
dnl Using '$V' instead of '$(V)' breaks IRIX make.
AM_V='$(V)'
AM_DEFAULT_V='$(AM_DEFAULT_VERBOSITY)'
else
AM_V=$AM_DEFAULT_VERBOSITY
AM_DEFAULT_V=$AM_DEFAULT_VERBOSITY
fi
AC_SUBST([AM_V])dnl
AM_SUBST_NOTMAKE([AM_V])dnl
AC_SUBST([AM_DEFAULT_V])dnl
AM_SUBST_NOTMAKE([AM_DEFAULT_V])dnl
AC_SUBST([AM_DEFAULT_VERBOSITY])dnl
AM_BACKSLASH='\'
AC_SUBST([AM_BACKSLASH])dnl
_AM_SUBST_NOTMAKE([AM_BACKSLASH])dnl
])
# Copyright (C) 2001-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# AM_PROG_INSTALL_STRIP
# ---------------------
# One issue with vendor 'install' (even GNU) is that you can't
# specify the program used to strip binaries. This is especially
# annoying in cross-compiling environments, where the build's strip
# is unlikely to handle the host's binaries.
# Fortunately install-sh will honor a STRIPPROG variable, so we
# always use install-sh in "make install-strip", and initialize
# STRIPPROG with the value of the STRIP variable (set by the user).
AC_DEFUN([AM_PROG_INSTALL_STRIP],
[AC_REQUIRE([AM_PROG_INSTALL_SH])dnl
# Installed binaries are usually stripped using 'strip' when the user
# runs "make install-strip". However 'strip' might not be the right
# tool to use in cross-compilation environments, therefore Automake
# will honor the 'STRIP' environment variable to overrule this program.
dnl Don't test for $cross_compiling = yes, because it might be 'maybe'.
if test "$cross_compiling" != no; then
AC_CHECK_TOOL([STRIP], [strip], :)
fi
INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s"
AC_SUBST([INSTALL_STRIP_PROGRAM])])
# Copyright (C) 2006-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_SUBST_NOTMAKE(VARIABLE)
# ---------------------------
# Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in.
# This macro is traced by Automake.
AC_DEFUN([_AM_SUBST_NOTMAKE])
# AM_SUBST_NOTMAKE(VARIABLE)
# --------------------------
# Public sister of _AM_SUBST_NOTMAKE.
AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)])
# Check how to create a tarball. -*- Autoconf -*-
# Copyright (C) 2004-2017 Free Software Foundation, Inc.
#
# This file is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# _AM_PROG_TAR(FORMAT)
# --------------------
# Check how to create a tarball in format FORMAT.
# FORMAT should be one of 'v7', 'ustar', or 'pax'.
#
# Substitute a variable $(am__tar) that is a command
# writing to stdout a FORMAT-tarball containing the directory
# $tardir.
# tardir=directory && $(am__tar) > result.tar
#
# Substitute a variable $(am__untar) that extracts such
# a tarball read from stdin.
# $(am__untar) < result.tar
#
AC_DEFUN([_AM_PROG_TAR],
[# Always define AMTAR for backward compatibility. Yes, it's still used
# in the wild :-( We should find a proper way to deprecate it ...
AC_SUBST([AMTAR], ['$${TAR-tar}'])
# We'll loop over all known methods to create a tar archive until one works.
_am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none'
m4_if([$1], [v7],
[am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'],
[m4_case([$1],
[ustar],
[# The POSIX 1988 'ustar' format is defined with fixed-size fields.
# There is notably a 21 bits limit for the UID and the GID. In fact,
# the 'pax' utility can hang on bigger UID/GID (see automake bug#8343
# and bug#13588).
am_max_uid=2097151 # 2^21 - 1
am_max_gid=$am_max_uid
# The $UID and $GID variables are not portable, so we need to resort
# to the POSIX-mandated id(1) utility. Errors in the 'id' calls
# below are definitely unexpected, so allow the users to see them
# (that is, avoid stderr redirection).
am_uid=`id -u || echo unknown`
am_gid=`id -g || echo unknown`
AC_MSG_CHECKING([whether UID '$am_uid' is supported by ustar format])
if test $am_uid -le $am_max_uid; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
_am_tools=none
fi
AC_MSG_CHECKING([whether GID '$am_gid' is supported by ustar format])
if test $am_gid -le $am_max_gid; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
_am_tools=none
fi],
[pax],
[],
[m4_fatal([Unknown tar format])])
AC_MSG_CHECKING([how to create a $1 tar archive])
# Go ahead even if we have the value already cached. We do so because we
# need to set the values for the 'am__tar' and 'am__untar' variables.
_am_tools=${am_cv_prog_tar_$1-$_am_tools}
for _am_tool in $_am_tools; do
case $_am_tool in
gnutar)
for _am_tar in tar gnutar gtar; do
AM_RUN_LOG([$_am_tar --version]) && break
done
am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"'
am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"'
am__untar="$_am_tar -xf -"
;;
plaintar)
# Must skip GNU tar: if it does not support --format= it doesn't create
# ustar tarball either.
(tar --version) >/dev/null 2>&1 && continue
am__tar='tar chf - "$$tardir"'
am__tar_='tar chf - "$tardir"'
am__untar='tar xf -'
;;
pax)
am__tar='pax -L -x $1 -w "$$tardir"'
am__tar_='pax -L -x $1 -w "$tardir"'
am__untar='pax -r'
;;
cpio)
am__tar='find "$$tardir" -print | cpio -o -H $1 -L'
am__tar_='find "$tardir" -print | cpio -o -H $1 -L'
am__untar='cpio -i -H $1 -d'
;;
none)
am__tar=false
am__tar_=false
am__untar=false
;;
esac
# If the value was cached, stop now. We just wanted to have am__tar
# and am__untar set.
test -n "${am_cv_prog_tar_$1}" && break
# tar/untar a dummy directory, and stop if the command works.
rm -rf conftest.dir
mkdir conftest.dir
echo GrepMe > conftest.dir/file
AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar])
rm -rf conftest.dir
if test -s conftest.tar; then
AM_RUN_LOG([$am__untar <conftest.tar])
AM_RUN_LOG([cat conftest.dir/file])
grep GrepMe conftest.dir/file >/dev/null 2>&1 && break
fi
done
rm -rf conftest.dir
AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool])
AC_MSG_RESULT([$am_cv_prog_tar_$1])])
AC_SUBST([am__tar])
AC_SUBST([am__untar])
]) # _AM_PROG_TAR
m4_include([m4/libtool.m4])
m4_include([m4/ltoptions.m4])
m4_include([m4/ltsugar.m4])
m4_include([m4/ltversion.m4])
m4_include([m4/lt~obsolete.m4])
liblognorm-2.0.6/tests/ 0000755 0001750 0001750 00000000000 13370251163 012027 5 0000000 0000000 liblognorm-2.0.6/tests/usrdef_ipaddr.sh 0000755 0001750 0001750 00000001110 13370250152 015107 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "simple user-defined type"
add_rule 'version=2'
add_rule 'type=@IPaddr:%ip:ipv4%'
add_rule 'type=@IPaddr:%ip:ipv6%'
add_rule 'rule=:an ip address %.:@IPaddr%'
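# the rule references the user-defined @IPaddr type declared above, so either
# an IPv4 or an IPv6 address is accepted and stored in the "ip" field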
execute 'an ip address 10.0.0.1'
assert_output_json_eq '{ "ip": "10.0.0.1" }'
execute 'an ip address 127::1'
assert_output_json_eq '{ "ip": "127::1" }'
execute 'an ip address 2001:DB8:0:1::10:1FF'
assert_output_json_eq '{ "ip": "2001:DB8:0:1::10:1FF" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cef_jsoncnf.sh 0000755 0001750 0001750 00000023152 13370250152 015726 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "CEF parser"
add_rule 'version=2'
add_rule 'rule=:%{"name":"f", "type":"cef"}%'
# fabricated tests to test specific functionality
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product\|1\|\\|Version|Signature ID|some name|Severity| aa=field1 bb=this is a name\=value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product|1|\\", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a name=value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \= value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a = value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity|'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=value'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "name": "value" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=val\nue' # embedded LF
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "name": "val\nue" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value' #invalid punctuation in extension
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value" }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\alue' #invalid escape in extension
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\\alue", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\\alue" }'
execute 'CEF:0|V\endor|Product|Version|Signature ID|some name|Severity| name=value' #invalid escape in header
assert_output_json_eq '{ "originalmsg": "CEF:0|V\\endor|Product|Version|Signature ID|some name|Severity| name=value", "unparsed-data": "CEF:0|V\\endor|Product|Version|Signature ID|some name|Severity| name=value" }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ' # single trailing space - valid
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ' # multiple trailing spaces - invalid
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| " }'
execute 'CEF:0|Vendor'
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor", "unparsed-data": "CEF:0|Vendor" }'
execute 'CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \= value cc=field 3'
assert_output_json_eq '{ "originalmsg": "CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \\= value cc=field 3", "unparsed-data": "CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \\= value cc=field 3" }'
execute ''
assert_output_json_eq '{ "originalmsg": "", "unparsed-data": "" }'
# finally, a use case from practice
execute 'CEF:0|ArcSight|ArcSight|10.0.0.15.0|rule:101|FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt|High| eventId=24934046519 type=2 mrt=8888882444085 sessionId=0 generatorID=34rSQWFOOOCAVlswcKFkbA\=\= categorySignificance=/Normal categoryBehavior=/Execute/Query categoryDeviceGroup=/Application categoryOutcome=/Success categoryObject=/Host/Application modelConfidence=0 severity=0 relevance=10 assetCriticality=0 priority=2 art=1427882454263 cat=/Detection/FOO/UNIX/Direct Root Connection Attempt deviceSeverity=Warning rt=1427881661000 shost=server.foo.bar src=10.0.0.1 sourceZoneID=MRL4p30sFOOO8panjcQnFbw\=\= sourceZoneURI=/All Zones/FOO Solutions/Server Subnet/UK/PAR-WDC-12-CELL5-PROD S2U 1 10.0.0.1-10.0.0.1 sourceGeoCountryCode=GB sourceGeoLocationInfo=London slong=-0.90843 slat=51.9039 dhost=server.foo.bar dst=10.0.0.1 destinationZoneID=McFOOO0sBABCUHR83pKJmQA\=\= destinationZoneURI=/All Zones/FOO Solutions/Prod/AMERICAS/FOO 10.0.0.1-10.0.0.1 duser=johndoe destinationGeoCountryCode=US destinationGeoLocationInfo=Jersey City dlong=-90.0435 dlat=30.732 fname=FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt filePath=/All Rules/Real-time Rules/ACBP-ACCESS CONTROL and AUTHORIZATION/FOO/Unix Server/FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt fileType=Rule ruleThreadId=NQVtdFOOABDrKsmLWpyq8g\=\= cs2= flexString2=DC0001-988 locality=1 cs2Label=Configuration Resource ahost=foo.bar agt=10.0.0.1 av=10.0.0.12 atz=Europe/Berlin aid=34rSQWFOOOBCAVlswcKFkbA\=\= at=superagent_ng dvchost=server.foo.bar dvc=10.0.0.1 deviceZoneID=Mbb8pFOOODol1dBKdURJA\=\= deviceZoneURI=/All Zones/FOO Solutions/Prod/GERMANY/FOO US2 Class6 A 508 10.0.0.1-10.0.0.1 dtz=Europe/Berlin deviceFacility=Rules Engine eventAnnotationStageUpdateTime=1427882444192 eventAnnotationModificationTime=1427882444192 eventAnnotationAuditTrail=1,1427453188050,root,Queued,,,,\n eventAnnotationVersion=1 eventAnnotationFlags=0 eventAnnotationEndTime=1427881661000 eventAnnotationManagerReceiptTime=1427882444085 _cefVer=0.1 ad.arcSightEventPath=3VcygrkkBABCAYFOOLlU13A\=\= baseEventIds=24934003731"'
assert_output_json_eq '{ "f": { "DeviceVendor": "ArcSight", "DeviceProduct": "ArcSight", "DeviceVersion": "10.0.0.15.0", "SignatureID": "rule:101", "Name": "FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "Severity": "High", "Extensions": { "eventId": "24934046519", "type": "2", "mrt": "8888882444085", "sessionId": "0", "generatorID": "34rSQWFOOOCAVlswcKFkbA==", "categorySignificance": "\/Normal", "categoryBehavior": "\/Execute\/Query", "categoryDeviceGroup": "\/Application", "categoryOutcome": "\/Success", "categoryObject": "\/Host\/Application", "modelConfidence": "0", "severity": "0", "relevance": "10", "assetCriticality": "0", "priority": "2", "art": "1427882454263", "cat": "\/Detection\/FOO\/UNIX\/Direct Root Connection Attempt", "deviceSeverity": "Warning", "rt": "1427881661000", "shost": "server.foo.bar", "src": "10.0.0.1", "sourceZoneID": "MRL4p30sFOOO8panjcQnFbw==", "sourceZoneURI": "\/All Zones\/FOO Solutions\/Server Subnet\/UK\/PAR-WDC-12-CELL5-PROD S2U 1 10.0.0.1-10.0.0.1", "sourceGeoCountryCode": "GB", "sourceGeoLocationInfo": "London", "slong": "-0.90843", "slat": "51.9039", "dhost": "server.foo.bar", "dst": "10.0.0.1", "destinationZoneID": "McFOOO0sBABCUHR83pKJmQA==", "destinationZoneURI": "\/All Zones\/FOO Solutions\/Prod\/AMERICAS\/FOO 10.0.0.1-10.0.0.1", "duser": "johndoe", "destinationGeoCountryCode": "US", "destinationGeoLocationInfo": "Jersey City", "dlong": "-90.0435", "dlat": "30.732", "fname": "FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "filePath": "\/All Rules\/Real-time Rules\/ACBP-ACCESS CONTROL and AUTHORIZATION\/FOO\/Unix Server\/FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "fileType": "Rule", "ruleThreadId": "NQVtdFOOABDrKsmLWpyq8g==", "cs2": "", "flexString2": "DC0001-988", "locality": "1", "cs2Label": "Configuration Resource", "ahost": "foo.bar", "agt": "10.0.0.1", "av": "10.0.0.12", "atz": "Europe\/Berlin", "aid": "34rSQWFOOOBCAVlswcKFkbA==", "at": "superagent_ng", "dvchost": "server.foo.bar", "dvc": "10.0.0.1", "deviceZoneID": "Mbb8pFOOODol1dBKdURJA==", "deviceZoneURI": "\/All Zones\/FOO Solutions\/Prod\/GERMANY\/FOO US2 Class6 A 508 10.0.0.1-10.0.0.1", "dtz": "Europe\/Berlin", "deviceFacility": "Rules Engine", "eventAnnotationStageUpdateTime": "1427882444192", "eventAnnotationModificationTime": "1427882444192", "eventAnnotationAuditTrail": "1,1427453188050,root,Queued,,,,\n", "eventAnnotationVersion": "1", "eventAnnotationFlags": "0", "eventAnnotationEndTime": "1427881661000", "eventAnnotationManagerReceiptTime": "1427882444085", "_cefVer": "0.1", "ad.arcSightEventPath": "3VcygrkkBABCAYFOOLlU13A==", "baseEventIds": "24934003731\"" } } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cef.sh 0000755 0001750 0001750 00000023125 13370250152 014206 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "CEF parser"
add_rule 'version=2'
add_rule 'rule=:%f:cef%'
# fabricated tests to test specific functionality
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product\|1\|\\|Version|Signature ID|some name|Severity| aa=field1 bb=this is a name\=value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product|1|\\", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a name=value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \= value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a = value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity|'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=value'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "name": "value" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=val\nue' # embedded LF
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "name": "val\nue" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value' #invalid punctuation in extension
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value" }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\alue' #invalid escape in extension
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\\alue", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\\alue" }'
execute 'CEF:0|V\endor|Product|Version|Signature ID|some name|Severity| name=value' #invalid escape in header
assert_output_json_eq '{ "originalmsg": "CEF:0|V\\endor|Product|Version|Signature ID|some name|Severity| name=value", "unparsed-data": "CEF:0|V\\endor|Product|Version|Signature ID|some name|Severity| name=value" }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ' # single trailing space - valid
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ' # multiple trailing spaces - invalid
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| " }'
execute 'CEF:0|Vendor'
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor", "unparsed-data": "CEF:0|Vendor" }'
execute 'CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \= value cc=field 3'
assert_output_json_eq '{ "originalmsg": "CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \\= value cc=field 3", "unparsed-data": "CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \\= value cc=field 3" }'
execute ''
assert_output_json_eq '{ "originalmsg": "", "unparsed-data": "" }'
# finally, a use case from practice
execute 'CEF:0|ArcSight|ArcSight|10.0.0.15.0|rule:101|FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt|High| eventId=24934046519 type=2 mrt=8888882444085 sessionId=0 generatorID=34rSQWFOOOCAVlswcKFkbA\=\= categorySignificance=/Normal categoryBehavior=/Execute/Query categoryDeviceGroup=/Application categoryOutcome=/Success categoryObject=/Host/Application modelConfidence=0 severity=0 relevance=10 assetCriticality=0 priority=2 art=1427882454263 cat=/Detection/FOO/UNIX/Direct Root Connection Attempt deviceSeverity=Warning rt=1427881661000 shost=server.foo.bar src=10.0.0.1 sourceZoneID=MRL4p30sFOOO8panjcQnFbw\=\= sourceZoneURI=/All Zones/FOO Solutions/Server Subnet/UK/PAR-WDC-12-CELL5-PROD S2U 1 10.0.0.1-10.0.0.1 sourceGeoCountryCode=GB sourceGeoLocationInfo=London slong=-0.90843 slat=51.9039 dhost=server.foo.bar dst=10.0.0.1 destinationZoneID=McFOOO0sBABCUHR83pKJmQA\=\= destinationZoneURI=/All Zones/FOO Solutions/Prod/AMERICAS/FOO 10.0.0.1-10.0.0.1 duser=johndoe destinationGeoCountryCode=US destinationGeoLocationInfo=Jersey City dlong=-90.0435 dlat=30.732 fname=FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt filePath=/All Rules/Real-time Rules/ACBP-ACCESS CONTROL and AUTHORIZATION/FOO/Unix Server/FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt fileType=Rule ruleThreadId=NQVtdFOOABDrKsmLWpyq8g\=\= cs2= flexString2=DC0001-988 locality=1 cs2Label=Configuration Resource ahost=foo.bar agt=10.0.0.1 av=10.0.0.12 atz=Europe/Berlin aid=34rSQWFOOOBCAVlswcKFkbA\=\= at=superagent_ng dvchost=server.foo.bar dvc=10.0.0.1 deviceZoneID=Mbb8pFOOODol1dBKdURJA\=\= deviceZoneURI=/All Zones/FOO Solutions/Prod/GERMANY/FOO US2 Class6 A 508 10.0.0.1-10.0.0.1 dtz=Europe/Berlin deviceFacility=Rules Engine eventAnnotationStageUpdateTime=1427882444192 eventAnnotationModificationTime=1427882444192 eventAnnotationAuditTrail=1,1427453188050,root,Queued,,,,\n eventAnnotationVersion=1 eventAnnotationFlags=0 eventAnnotationEndTime=1427881661000 eventAnnotationManagerReceiptTime=1427882444085 _cefVer=0.1 ad.arcSightEventPath=3VcygrkkBABCAYFOOLlU13A\=\= baseEventIds=24934003731"'
assert_output_json_eq '{ "f": { "DeviceVendor": "ArcSight", "DeviceProduct": "ArcSight", "DeviceVersion": "10.0.0.15.0", "SignatureID": "rule:101", "Name": "FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "Severity": "High", "Extensions": { "eventId": "24934046519", "type": "2", "mrt": "8888882444085", "sessionId": "0", "generatorID": "34rSQWFOOOCAVlswcKFkbA==", "categorySignificance": "\/Normal", "categoryBehavior": "\/Execute\/Query", "categoryDeviceGroup": "\/Application", "categoryOutcome": "\/Success", "categoryObject": "\/Host\/Application", "modelConfidence": "0", "severity": "0", "relevance": "10", "assetCriticality": "0", "priority": "2", "art": "1427882454263", "cat": "\/Detection\/FOO\/UNIX\/Direct Root Connection Attempt", "deviceSeverity": "Warning", "rt": "1427881661000", "shost": "server.foo.bar", "src": "10.0.0.1", "sourceZoneID": "MRL4p30sFOOO8panjcQnFbw==", "sourceZoneURI": "\/All Zones\/FOO Solutions\/Server Subnet\/UK\/PAR-WDC-12-CELL5-PROD S2U 1 10.0.0.1-10.0.0.1", "sourceGeoCountryCode": "GB", "sourceGeoLocationInfo": "London", "slong": "-0.90843", "slat": "51.9039", "dhost": "server.foo.bar", "dst": "10.0.0.1", "destinationZoneID": "McFOOO0sBABCUHR83pKJmQA==", "destinationZoneURI": "\/All Zones\/FOO Solutions\/Prod\/AMERICAS\/FOO 10.0.0.1-10.0.0.1", "duser": "johndoe", "destinationGeoCountryCode": "US", "destinationGeoLocationInfo": "Jersey City", "dlong": "-90.0435", "dlat": "30.732", "fname": "FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "filePath": "\/All Rules\/Real-time Rules\/ACBP-ACCESS CONTROL and AUTHORIZATION\/FOO\/Unix Server\/FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "fileType": "Rule", "ruleThreadId": "NQVtdFOOABDrKsmLWpyq8g==", "cs2": "", "flexString2": "DC0001-988", "locality": "1", "cs2Label": "Configuration Resource", "ahost": "foo.bar", "agt": "10.0.0.1", "av": "10.0.0.12", "atz": "Europe\/Berlin", "aid": "34rSQWFOOOBCAVlswcKFkbA==", "at": "superagent_ng", "dvchost": "server.foo.bar", "dvc": "10.0.0.1", "deviceZoneID": "Mbb8pFOOODol1dBKdURJA==", "deviceZoneURI": "\/All Zones\/FOO Solutions\/Prod\/GERMANY\/FOO US2 Class6 A 508 10.0.0.1-10.0.0.1", "dtz": "Europe\/Berlin", "deviceFacility": "Rules Engine", "eventAnnotationStageUpdateTime": "1427882444192", "eventAnnotationModificationTime": "1427882444192", "eventAnnotationAuditTrail": "1,1427453188050,root,Queued,,,,\n", "eventAnnotationVersion": "1", "eventAnnotationFlags": "0", "eventAnnotationEndTime": "1427881661000", "eventAnnotationManagerReceiptTime": "1427882444085", "_cefVer": "0.1", "ad.arcSightEventPath": "3VcygrkkBABCAYFOOLlU13A==", "baseEventIds": "24934003731\"" } } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_string.sh 0000755 0001750 0001750 00000002403 13370250152 014753 0000000 0000000 #!/bin/bash
# added 2015-09-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "string syntax"
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string% b'
execute 'a test b'
assert_output_json_eq '{"f": "test"}'
execute 'a "test" b'
assert_output_json_eq '{"f": "test"}'
execute 'a "test with space" b'
assert_output_json_eq '{"f": "test with space"}'
execute 'a "test with "" double escape" b'
assert_output_json_eq '{ "f": "test with \" double escape" }'
execute 'a "test with \" backslash escape" b'
assert_output_json_eq '{ "f": "test with \" backslash escape" }'
echo test quoting.mode
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string{"quoting.mode":"none"}% b'
execute 'a test b'
assert_output_json_eq '{"f": "test"}'
execute 'a "test" b'
assert_output_json_eq '{"f": "\"test\""}'
echo "test quoting.char.*"
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string{"quoting.char.begin":"[", "quoting.char.end":"]"}% b'
execute 'a test b'
assert_output_json_eq '{"f": "test"}'
execute 'a [test] b'
assert_output_json_eq '{"f": "test"}'
execute 'a [test test2] b'
assert_output_json_eq '{"f": "test test2"}'
# things that need to NOT match
cleanup_tmp_files
liblognorm-2.0.6/tests/string_rb_simple_2_lines.sh 0000755 0001750 0001750 00000001175 13370250152 017264 0000000 0000000 #!/bin/bash
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "simple rulebase via string"
execute_with_string 'rule=:%w:word%
rule=:%n:number%' 'test'
assert_output_json_eq '{ "w": "test" }'
execute_with_string 'rule=:%w:word%
rule=:%n:number%' '2'
assert_output_json_eq '{ "n": "2" }'
#This is a correct word...
execute_with_string 'rule=:%w:word%
rule=:%n:number%' '2.3'
assert_output_json_eq '{ "w": "2.3" }'
#check error case
execute_with_string 'rule=:%w:word%
rule=:%n:number%' '2 3'
assert_output_json_eq '{ "originalmsg": "2 3", "unparsed-data": " 3" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/seq_simple.sh 0000755 0001750 0001750 00000000677 13370250152 014456 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "simple sequence (array) syntax"
add_rule 'version=2'
add_rule 'rule=:a %[{"name":"num", "type":"number"}, {"name":"-", "type":"literal", "text":":"}, {"name":"hex", "type":"hexnumber"}]% b'
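# the bracketed sequence applies its parsers in order: a number, the literal ":"
# (whose name "-" discards the match), then a hexnumber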
execute 'a 4711:0x4712 b'
assert_output_json_eq '{ "hex": "0x4712", "num": "4711" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_mac48.sh 0000755 0001750 0001750 00000001311 13370250152 014356 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "dmac48 syntax"
reset_rules
add_rule 'version=2'
add_rule 'rule=:%field:mac48%'
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
execute 'f0-f6-1c-5f-cc-a2'
assert_output_json_eq '{"field": "f0-f6-1c-5f-cc-a2"}'
# things that need to NOT match
execute 'f0-f6:1c:5f:cc-a2'
assert_output_json_eq '{ "originalmsg": "f0-f6:1c:5f:cc-a2", "unparsed-data": "f0-f6:1c:5f:cc-a2" }'
execute 'f0:f6:1c:xf:cc:a2'
assert_output_json_eq '{ "originalmsg": "f0:f6:1c:xf:cc:a2", "unparsed-data": "f0:f6:1c:xf:cc:a2" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_tokenized_with_invalid_ruledef.sh 0000755 0001750 0001750 00000002612 13370250152 021712 0000000 0000000 #!/bin/bash
# added 2014-11-18 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "tokenized field with invalid rule definition"
add_rule 'rule=:%arr:tokenized%'
execute '123 abc 456 def'
assert_output_contains '"unparsed-data": "123 abc 456 def"'
assert_output_contains '"originalmsg": "123 abc 456 def"'
reset_rules
add_rule 'rule=:%arr:tokenized: %'
execute '123 abc 456 def'
assert_output_contains '"unparsed-data": "123 abc 456 def"'
assert_output_contains '"originalmsg": "123 abc 456 def"'
reset_rules
add_rule 'rule=:%arr:tokenized:quux:%'
execute '123 abc 456 def'
assert_output_contains '"unparsed-data": "123 abc 456 def"'
assert_output_contains '"originalmsg": "123 abc 456 def"'
reset_rules
add_rule 'rule=:%arr:tokenized:quux:some_non_existant_type%'
execute '123 abc 456 def'
assert_output_contains '"unparsed-data": "123 abc 456 def"'
assert_output_contains '"originalmsg": "123 abc 456 def"'
reset_rules
add_rule 'rule=:%arr:tokenized:quux:some_non_existant_type:%'
execute '123 abc 456 def'
assert_output_contains '"unparsed-data": "123 abc 456 def"'
assert_output_contains '"originalmsg": "123 abc 456 def"'
reset_rules
add_rule 'rule=:%arr:tokenized::::%%%%'
execute '123 abc 456 def'
assert_output_contains '"unparsed-data": "123 abc 456 def"'
assert_output_contains '"originalmsg": "123 abc 456 def"'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_hexnumber_jsoncnf.sh 0000755 0001750 0001750 00000001116 13370250152 017162 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "hexnumber field"
add_rule 'version=2'
add_rule 'rule=:here is a number %{"name":"num", "type":"hexnumber"}% in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": "0x1234"}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_checkpoint-lea.sh 0000755 0001750 0001750 00000000600 13370250152 016330 0000000 0000000 #!/bin/bash
# added 2015-06-18 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "Checkpoint LEA parser"
add_rule 'version=2'
add_rule 'rule=:%f:checkpoint-lea%'
execute 'tcp_flags: RST-ACK; src: 192.168.0.1;'
assert_output_json_eq '{ "f": { "tcp_flags": "RST-ACK", "src": "192.168.0.1" } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_float_jsoncnf.sh 0000755 0001750 0001750 00000001437 13370250152 016300 0000000 0000000 #!/bin/bash
# added 2015-02-25 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "float field"
add_rule 'version=2'
add_rule 'rule=:here is a number %{"name":"num", "type":"float"}% in floating pt form'
execute 'here is a number 15.9 in floating pt form'
assert_output_json_eq '{"num": "15.9"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:here is a negative number %{"name":"num", "type":"float"}% for you'
execute 'here is a negative number -4.2 for you'
assert_output_json_eq '{"num": "-4.2"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:here is another real number %{"name":"real_no", "type":"float"}%.'
execute 'here is another real number 2.71.'
assert_output_json_eq '{"real_no": "2.71"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/runaway_rule.sh 0000755 0001750 0001750 00000001173 13370250152 015022 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
# Note that this test produces an error message, as it encounters the
# runaway rule. This is OK and actually must happen. The prime point
# of the test is that it correctly loads the second rule, which
# would otherwise be consumed by the runaway rule.
. $srcdir/exec.sh
test_def $0 "runaway rule (unmatched percent signs)"
reset_rules
add_rule 'version=2'
add_rule 'rule=:test %f1:word unmatched percent'
add_rule 'rule=:%field:word%'
execute 'data'
assert_output_json_eq '{"field": "data"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_float.sh 0000755 0001750 0001750 00000001340 13370250152 014551 0000000 0000000 #!/bin/bash
# added 2015-02-25 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "float field"
add_rule 'version=2'
add_rule 'rule=:here is a number %num:float% in floating pt form'
execute 'here is a number 15.9 in floating pt form'
assert_output_json_eq '{"num": "15.9"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:here is a negative number %num:float% for you'
execute 'here is a negative number -4.2 for you'
assert_output_json_eq '{"num": "-4.2"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:here is another real number %real_no:float%.'
execute 'here is another real number 2.71.'
assert_output_json_eq '{"real_no": "2.71"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_mac48_v1.sh 0000755 0001750 0001750 00000001250 13370250152 014766 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "dmac48 syntax"
add_rule 'rule=:%field:mac48%'
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
execute 'f0-f6-1c-5f-cc-a2'
assert_output_json_eq '{"field": "f0-f6-1c-5f-cc-a2"}'
# things that need to NOT match
execute 'f0-f6:1c:5f:cc-a2'
assert_output_json_eq '{ "originalmsg": "f0-f6:1c:5f:cc-a2", "unparsed-data": "f0-f6:1c:5f:cc-a2" }'
execute 'f0:f6:1c:xf:cc:a2'
assert_output_json_eq '{ "originalmsg": "f0:f6:1c:xf:cc:a2", "unparsed-data": "f0:f6:1c:xf:cc:a2" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_v2-iptables_v1.sh 0000755 0001750 0001750 00000004745 13370250152 016216 0000000 0000000 #!/bin/bash
# added 2015-04-30 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "v2-iptables field"
add_rule 'rule=:iptables output denied: %field:v2-iptables%'
# first, a real-world case
execute 'iptables output denied: IN= OUT=eth0 SRC=176.9.56.141 DST=168.192.14.3 LEN=32 TOS=0x00 PREC=0x00 TTL=64 ID=39110 DF PROTO=UDP SPT=49564 DPT=2010 LEN=12'
assert_output_json_eq '{ "field": { "IN": "", "OUT": "eth0", "SRC": "176.9.56.141", "DST": "168.192.14.3", "LEN": "12", "TOS": "0x00", "PREC": "0x00", "TTL": "64", "ID": "39110", "DF": null, "PROTO": "UDP", "SPT": "49564", "DPT": "2010" } }'
# now some more "fabricated" cases for a more readable test
reset_rules
add_rule 'rule=:iptables: %field:v2-iptables%'
execute 'iptables: IN=value SECOND=test'
assert_output_json_eq '{ "field": { "IN": "value", "SECOND": "test" }} }'
execute 'iptables: IN= SECOND=test'
assert_output_json_eq '{ "field": { "IN": ""} }'
execute 'iptables: IN SECOND=test'
assert_output_json_eq '{ "field": { "IN": null} }'
execute 'iptables: IN=invalue OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": "invalue", "OUT": "outvalue" } }'
execute 'iptables: IN= OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": "", "OUT": "outvalue" } }'
execute 'iptables: IN OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": null, "OUT": "outvalue" } }'
#
#check cases where parsing failure must occur
#
echo verify failure cases
# lower case is not permitted
execute 'iptables: in=value'
assert_output_json_eq '{ "originalmsg": "iptables: in=value", "unparsed-data": "in=value" }'
execute 'iptables: in='
assert_output_json_eq '{ "originalmsg": "iptables: in=", "unparsed-data": "in=" }'
execute 'iptables: in'
assert_output_json_eq '{ "originalmsg": "iptables: in", "unparsed-data": "in" }'
execute 'iptables: IN' # single field is NOT permitted!
assert_output_json_eq '{ "originalmsg": "iptables: IN", "unparsed-data": "IN" }'
# multiple spaces between n=v pairs are not permitted
execute 'iptables: IN=invalue OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN=invalue OUT=outvalue", "unparsed-data": "IN=invalue OUT=outvalue" }'
execute 'iptables: IN= OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN= OUT=outvalue", "unparsed-data": "IN= OUT=outvalue" }'
execute 'iptables: IN OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN OUT=outvalue", "unparsed-data": "IN OUT=outvalue" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/strict_prefix_matching_1.sh 0000755 0001750 0001750 00000000733 13370250152 017265 0000000 0000000 #!/bin/bash
# added 2015-11-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "one rule is strict prefix of a longer one"
add_rule 'version=2'
add_rule 'rule=:a word %w1:word%'
add_rule 'rule=:a word %w1:word% another word %w2:word%'
execute 'a word w1 another word w2'
assert_output_json_eq '{ "w2": "w2", "w1": "w1" }'
execute 'a word w1'
assert_output_json_eq '{ "w1": "w1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_string_perm_chars.sh 0000755 0001750 0001750 00000003540 13370250152 017161 0000000 0000000 #!/bin/bash
# added 2015-09-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "string type with permitted chars"
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string{"matching.permitted":"abc"}% b'
execute 'a abc b'
assert_output_json_eq '{"f": "abc"}'
execute 'a abcd b'
assert_output_json_eq '{"originalmsg": "a abcd b", "unparsed-data": "abcd b" }'
execute 'a abbbbbcccbaaaa b'
assert_output_json_eq '{"f": "abbbbbcccbaaaa"}'
execute 'a "abc" b'
assert_output_json_eq '{"f": "abc"}'
echo "param array"
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string{"matching.permitted":[
{"chars":"ab"},
{"chars":"c"}
]}% b'
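# the array form of matching.permitted merges several specifications; here the
# character sets "ab" and "c" together permit exactly a, b and c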
execute 'a abc b'
assert_output_json_eq '{"f": "abc"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string{"matching.permitted":[
{"class":"digit"},
{"chars":"x"}
]}% b'
execute 'a 12x3 b'
assert_output_json_eq '{"f": "12x3"}'
echo alpha
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string{"matching.permitted":[
{"class":"alpha"}
]}% b'
execute 'a abcdefghijklmnopqrstuvwxyZ b'
assert_output_json_eq '{"f": "abcdefghijklmnopqrstuvwxyZ"}'
execute 'a abcd1 b'
assert_output_json_eq '{"originalmsg": "a abcd1 b", "unparsed-data": "abcd1 b" }'
echo alnum
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %f:string{"matching.permitted":[
{"class":"alnum"}
]}% b'
execute 'a abcdefghijklmnopqrstuvwxyZ b'
assert_output_json_eq '{"f": "abcdefghijklmnopqrstuvwxyZ"}'
execute 'a abcd1 b'
assert_output_json_eq '{"f": "abcd1" }'
execute 'a abcd1_ b'
assert_output_json_eq '{ "originalmsg": "a abcd1_ b", "unparsed-data": "abcd1_ b" } '
cleanup_tmp_files
liblognorm-2.0.6/tests/include.sh 0000755 0001750 0001750 00000000766 13370250152 013737 0000000 0000000 #!/bin/bash
# added 2015-08-28 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "include (success case)"
reset_rules
add_rule 'version=2'
add_rule 'include=inc.rulebase'
reset_rules inc
add_rule 'version=2' inc
add_rule 'rule=:%field:mac48%' inc
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
# single test is sufficient, because that only works if the include
# worked ;)
cleanup_tmp_files
liblognorm-2.0.6/tests/field_tokenized_recursive.sh 0000755 0001750 0001750 00000005143 13370250152 017534 0000000 0000000 #!/bin/bash
# added 2014-12-08 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "tokenized field with recursive field matching tokens"
#recursive field inside tokenized field with default tail field
add_rule 'rule=:%subnet_addr:ipv4%/%subnet_mask:number%%tail:rest%'
add_rule 'rule=:%ip_addr:ipv4%%tail:rest%'
add_rule 'rule=:blocked inbound via: %via_ip:ipv4% from: %addresses:tokenized:, :recursive% to %server_ip:ipv4%'
execute 'blocked inbound via: 192.168.1.1 from: 1.2.3.4, 5.6.16.0/12, 8.9.10.11, 12.13.14.15, 16.17.18.0/8, 19.20.21.24/3 to 192.168.1.5'
assert_output_json_eq '{
"addresses": [
{"ip_addr": "1.2.3.4"},
{"subnet_addr": "5.6.16.0", "subnet_mask": "12"},
{"ip_addr": "8.9.10.11"},
{"ip_addr": "12.13.14.15"},
{"subnet_addr": "16.17.18.0", "subnet_mask": "8"},
{"subnet_addr": "19.20.21.24", "subnet_mask": "3"}],
"server_ip": "192.168.1.5",
"via_ip": "192.168.1.1"}'
reset_rules
#recursive field inside tokenized field with default tail field
reset_rules
add_rule 'rule=:%subnet_addr:ipv4%/%subnet_mask:number%%remains:rest%'
add_rule 'rule=:%ip_addr:ipv4%%remains:rest%'
add_rule 'rule=:blocked inbound via: %via_ip:ipv4% from: %addresses:tokenized:, :recursive:remains% to %server_ip:ipv4%'
execute 'blocked inbound via: 192.168.1.1 from: 1.2.3.4, 5.6.16.0/12, 8.9.10.11, 12.13.14.15, 16.17.18.0/8, 19.20.21.24/3 to 192.168.1.5'
assert_output_json_eq '{
"addresses": [
{"ip_addr": "1.2.3.4"},
{"subnet_addr": "5.6.16.0", "subnet_mask": "12"},
{"ip_addr": "8.9.10.11"},
{"ip_addr": "12.13.14.15"},
{"subnet_addr": "16.17.18.0", "subnet_mask": "8"},
{"subnet_addr": "19.20.21.24", "subnet_mask": "3"}],
"server_ip": "192.168.1.5",
"via_ip": "192.168.1.1"}'
#recursive field inside tokenized field with default tail field
reset_rules 'net'
add_rule 'rule=:%subnet_addr:ipv4%/%subnet_mask:number%%remains:rest%' 'net'
add_rule 'rule=:%ip_addr:ipv4%%remains:rest%' 'net'
reset_rules
add_rule 'rule=:blocked inbound via: %via_ip:ipv4% from: %addresses:tokenized:, :descent:./net.rulebase:remains% to %server_ip:ipv4%'
execute 'blocked inbound via: 192.168.1.1 from: 1.2.3.4, 5.6.16.0/12, 8.9.10.11, 12.13.14.15, 16.17.18.0/8, 19.20.21.24/3 to 192.168.1.5'
assert_output_json_eq '{
"addresses": [
{"ip_addr": "1.2.3.4"},
{"subnet_addr": "5.6.16.0", "subnet_mask": "12"},
{"ip_addr": "8.9.10.11"},
{"ip_addr": "12.13.14.15"},
{"subnet_addr": "16.17.18.0", "subnet_mask": "8"},
{"subnet_addr": "19.20.21.24", "subnet_mask": "3"}],
"server_ip": "192.168.1.5",
"via_ip": "192.168.1.1"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/json_eq.c 0000644 0001750 0001750 00000005343 13370246323 013560 0000000 0000000 #include "config.h"
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <math.h>
#include <json.h>
#include "internal.h"
typedef struct json_object obj;
static int eq(obj* expected, obj* actual);
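/* Compare two JSON objects: every key present in "expected" must also exist in
 * "actual" with an equal value; extra keys in "actual" are ignored, so this is
 * a subset match. */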
static int obj_eq(obj* expected, obj* actual) {
int eql = 1;
struct json_object_iterator it = json_object_iter_begin(expected);
struct json_object_iterator itEnd = json_object_iter_end(expected);
while (!json_object_iter_equal(&it, &itEnd)) {
obj *actual_val;
json_object_object_get_ex(actual, json_object_iter_peek_name(&it), &actual_val);
eql &= eq(json_object_iter_peek_value(&it), actual_val);
json_object_iter_next(&it);
}
return eql;
}
static int arr_eq(obj* expected, obj* actual) {
int eql = 1;
int expected_len = json_object_array_length(expected);
int actual_len = json_object_array_length(actual);
if (expected_len != actual_len) return 0;
for (int i = 0; i < expected_len; i++) {
obj* exp = json_object_array_get_idx(expected, i);
obj* act = json_object_array_get_idx(actual, i);
eql &= eq(exp, act);
}
return eql;
}
static int str_eq(obj* expected, obj* actual) {
const char* exp_str = json_object_to_json_string(expected);
const char* act_str = json_object_to_json_string(actual);
return strcmp(exp_str, act_str) == 0;
}
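/* Recursive equality check dispatching on the JSON type. Doubles are compared
 * with an absolute tolerance of 0.001; strings are compared via their
 * serialized (JSON) form. */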
static int eq(obj* expected, obj* actual) {
if (expected == NULL && actual == NULL) {
return 1;
} else if (expected == NULL) {
return 0;
} else if (actual == NULL) {
return 0;
}
enum json_type expected_type = json_object_get_type(expected);
enum json_type actual_type = json_object_get_type(actual);
if (expected_type != actual_type) return 0;
switch(expected_type) {
case json_type_null:
return 1;
case json_type_boolean:
return json_object_get_boolean(expected) == json_object_get_boolean(actual);
case json_type_double:
return (fabs(json_object_get_double(expected) - json_object_get_double(actual)) < 0.001);
case json_type_int:
return json_object_get_int64(expected) == json_object_get_int64(actual);
case json_type_object:
return obj_eq(expected, actual);
case json_type_array:
return arr_eq(expected, actual);
case json_type_string:
return str_eq(expected, actual);
default:
fprintf(stderr, "unexpected type in %s:%d\n", __FILE__, __LINE__);
abort();
}
return 0;
}
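/* Usage: json_eq EXPECTED ACTUAL
 * Exits 0 if the two JSON documents compare equal, 1 otherwise (printing both
 * documents), and 100 if the two arguments are missing. */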
int main(int argc, char** argv) {
if (argc != 3) {
fprintf(stderr, "expected and actual json not given, number of args was: %d\n", argc);
exit(100);
}
obj* expected = json_tokener_parse(argv[1]);
obj* actual = json_tokener_parse(argv[2]);
int result = eq(expected, actual) ? 0 : 1;
json_object_put(expected);
json_object_put(actual);
if (result != 0) {
printf("JSONs weren't equal. \n\tExpected: \n\t\t%s\n\tActual: \n\t\t%s\n", argv[1], argv[2]);
}
return result;
}
liblognorm-2.0.6/tests/Makefile.am 0000644 0001750 0001750 00000011312 13370250152 013776 0000000 0000000 check_PROGRAMS = json_eq
# re-enable if we really need the c program check check_PROGRAMS = json_eq user_test
json_eq_self_sources = json_eq.c
json_eq_SOURCES = $(json_eq_self_sources)
json_eq_CPPFLAGS = $(JSON_C_CFLAGS) $(WARN_CFLAGS) -I$(top_srcdir)/src
json_eq_LDADD = $(JSON_C_LIBS) -lm -lestr
json_eq_LDFLAGS = -no-install
#user_test_SOURCES = user_test.c
#user_test_CPPFLAGS = $(LIBLOGNORM_CFLAGS) $(JSON_C_CFLAGS) $(LIBESTR_CFLAGS)
#user_test_LDADD = $(JSON_C_LIBS) $(LIBLOGNORM_LIBS) $(LIBESTR_LIBS) ../compat/compat.la
#user_test_LDFLAGS = -no-install
# The following tests are for the new pdag-based engine (v2+).
#
# There are some notes due:
#
# removed field_float_with_invalid_ruledef.sh because test is not valid.
# more info: https://github.com/rsyslog/liblognorm/issues/98
# note that probably the other currently disabled *_invalid_*.sh
# tests are also affected.
#
# there seems to be a problem with some format in cisco-interface-spec
# Probably this was just not seen in v1, because of some impreciseness
# in the ptree normalizer. Pushing equivalent v2 test back until v2
# implementation is further developed.
TESTS_SHELLSCRIPTS = \
usrdef_simple.sh \
usrdef_two.sh \
usrdef_twotypes.sh \
usrdef_actual1.sh \
usrdef_ipaddr.sh \
usrdef_ipaddr_dotdot.sh \
usrdef_ipaddr_dotdot2.sh \
usrdef_ipaddr_dotdot3.sh \
usrdef_nested_segfault.sh \
missing_line_ending.sh \
lognormalizer-invld-call.sh \
string_rb_simple.sh \
string_rb_simple_2_lines.sh \
names.sh \
literal.sh \
include.sh \
include_RULEBASES.sh \
seq_simple.sh \
runaway_rule.sh \
runaway_rule_comment.sh \
annotate.sh \
alternative_simple.sh \
alternative_three.sh \
alternative_nested.sh \
alternative_segfault.sh \
repeat_very_simple.sh \
repeat_simple.sh \
repeat_mismatch_in_while.sh \
repeat_while_alternative.sh \
repeat_alternative_nested.sh \
parser_prios.sh \
parser_whitespace.sh \
parser_whitespace_jsoncnf.sh \
parser_LF.sh \
parser_LF_jsoncnf.sh \
strict_prefix_actual_sample1.sh \
strict_prefix_matching_1.sh \
strict_prefix_matching_2.sh \
field_string.sh \
field_string_perm_chars.sh \
field_string_lazy_matching.sh \
field_string_doc_sample_lazy.sh \
field_number.sh \
field_number-fmt_number.sh \
field_number_maxval.sh \
field_hexnumber.sh \
field_hexnumber-fmt_number.sh \
field_hexnumber_jsoncnf.sh \
field_hexnumber_range.sh \
field_hexnumber_range_jsoncnf.sh \
rule_last_str_short.sh \
field_mac48.sh \
field_mac48_jsoncnf.sh \
field_name_value.sh \
field_name_value_jsoncnf.sh \
field_kernel_timestamp.sh \
field_kernel_timestamp_jsoncnf.sh \
field_whitespace.sh \
rule_last_str_long.sh \
field_whitespace_jsoncnf.sh \
field_rest.sh \
field_rest_jsoncnf.sh \
field_json.sh \
field_json_jsoncnf.sh \
field_cee-syslog.sh \
field_cee-syslog_jsoncnf.sh \
field_ipv6.sh \
field_ipv6_jsoncnf.sh \
field_v2-iptables.sh \
field_v2-iptables_jsoncnf.sh \
field_cef.sh \
field_cef_jsoncnf.sh \
field_checkpoint-lea.sh \
field_checkpoint-lea_jsoncnf.sh \
field_checkpoint-lea-terminator.sh \
field_duration.sh \
field_duration_jsoncnf.sh \
field_float.sh \
field_float-fmt_number.sh \
field_float_jsoncnf.sh \
field_rfc5424timestamp-fmt_timestamp-unix.sh \
field_rfc5424timestamp-fmt_timestamp-unix-ms.sh \
very_long_logline_jsoncnf.sh
# now come tests for the legacy (v1) engine
TESTS_SHELLSCRIPTS += \
missing_line_ending_v1.sh \
runaway_rule_v1.sh \
runaway_rule_comment_v1.sh \
field_hexnumber_v1.sh \
field_mac48_v1.sh \
field_name_value_v1.sh \
field_kernel_timestamp_v1.sh \
field_whitespace_v1.sh \
field_rest_v1.sh \
field_json_v1.sh \
field_cee-syslog_v1.sh \
field_ipv6_v1.sh \
field_v2-iptables_v1.sh \
field_cef_v1.sh \
field_checkpoint-lea_v1.sh \
field_duration_v1.sh \
field_float_v1.sh \
field_tokenized.sh \
field_tokenized_with_invalid_ruledef.sh \
field_recursive.sh \
field_tokenized_recursive.sh \
field_interpret.sh \
field_interpret_with_invalid_ruledef.sh \
field_descent.sh \
field_descent_with_invalid_ruledef.sh \
field_suffixed.sh \
field_suffixed_with_invalid_ruledef.sh \
field_cisco-interface-spec.sh \
field_cisco-interface-spec-at-EOL.sh \
field_float_with_invalid_ruledef.sh \
very_long_logline.sh
#re-add to TESTS if needed: user_test
TESTS = \
$(TESTS_SHELLSCRIPTS)
REGEXP_TESTS = \
field_regex_default_group_parse_and_return.sh \
field_regex_invalid_args.sh \
field_regex_with_consume_group.sh \
field_regex_with_consume_group_and_return_group.sh \
field_regex_with_negation.sh \
field_tokenized_with_regex.sh \
field_regex_while_regex_support_is_disabled.sh
EXTRA_DIST = exec.sh \
$(TESTS_SHELLSCRIPTS) \
$(REGEXP_TESTS) \
$(json_eq_self_sources) \
$(user_test_SOURCES)
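# The regex-based tests are only added to TESTS when the ENABLE_REGEXP
# Automake conditional is set (presumably via the configure switch that
# enables regex support); otherwise they are shipped but not run.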
if ENABLE_REGEXP
TESTS += $(REGEXP_TESTS)
endif
liblognorm-2.0.6/tests/string_rb_simple.sh 0000755 0001750 0001750 00000000366 13370250152 015652 0000000 0000000 #!/bin/bash
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "simple rulebase via string"
execute_with_string 'rule=:%w:word%' 'test'
assert_output_json_eq '{ "w": "test" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/annotate.sh 0000755 0001750 0001750 00000001575 13370250152 014124 0000000 0000000 #!/bin/bash
# added 2016-11-08 by Rainer Gerhards
. $srcdir/exec.sh
test_def $0 "annotate functionality"
reset_rules
add_rule 'version=2'
add_rule 'rule=ABC,WIN:<%-:number%>1 %-:date-rfc5424% %-:word% %tag:word% - - -'
add_rule 'rule=ABC:<%-:number%>1 %-:date-rfc5424% %-:word% %tag:word% + - -'
add_rule 'rule=WIN:<%-:number%>1 %-:date-rfc5424% %-:word% %tag:word% . - -'
add_rule 'annotate=WIN:+annot1="WIN" # inline-comment'
add_rule 'annotate=ABC:+annot2="ABC"'
execute '<37>1 2016-11-03T23:59:59+03:00 server.example.net TAG . - -'
assert_output_json_eq '{ "tag": "TAG", "annot1": "WIN" }'
execute '<37>1 2016-11-03T23:59:59+03:00 server.example.net TAG + - -'
assert_output_json_eq '{ "tag": "TAG", "annot2": "ABC" }'
execute '<6>1 2016-09-02T07:41:07+02:00 server.example.net TAG - - -'
assert_output_json_eq '{ "tag": "TAG", "annot1": "WIN", "annot2": "ABC" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_kernel_timestamp_jsoncnf.sh 0000755 0001750 0001750 00000003436 13370250152 020537 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "kernel timestamp parser"
add_rule 'version=2'
add_rule 'rule=:begin %{"name":"timestamp", "type":"kernel-timestamp"}% end'
execute 'begin [12345.123456] end'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:begin %{"name":"timestamp", "type":"kernel-timestamp"}%'
execute 'begin [12345.123456]'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:%{"name":"timestamp", "type":"kernel-timestamp"}%'
execute '[12345.123456]'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
execute '[154469.133028]'
assert_output_json_eq '{ "timestamp": "[154469.133028]"}'
execute '[123456789012.123456]'
assert_output_json_eq '{ "timestamp": "[123456789012.123456]"}'
#check cases where parsing failure must occur
execute '[1234.123456]'
assert_output_json_eq '{"originalmsg": "[1234.123456]", "unparsed-data": "[1234.123456]" }'
execute '[1234567890123.123456]'
assert_output_json_eq '{"originalmsg": "[1234567890123.123456]", "unparsed-data": "[1234567890123.123456]" }'
execute '[123456789012.12345]'
assert_output_json_eq '{ "originalmsg": "[123456789012.12345]", "unparsed-data": "[123456789012.12345]" }'
execute '[123456789012.1234567]'
assert_output_json_eq '{ "originalmsg": "[123456789012.1234567]", "unparsed-data": "[123456789012.1234567]" }'
execute '(123456789012.123456]'
assert_output_json_eq '{ "originalmsg": "(123456789012.123456]", "unparsed-data": "(123456789012.123456]" }'
execute '[123456789012.123456'
assert_output_json_eq '{ "originalmsg": "[123456789012.123456", "unparsed-data": "[123456789012.123456" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/alternative_simple.sh 0000755 0001750 0001750 00000000730 13370250152 016172 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "simple alternative syntax"
add_rule 'version=2'
add_rule 'rule=:a %{"type":"alternative", "parser":[{"name":"num", "type":"number"}, {"name":"hex", "type":"hexnumber"}]}% b'
execute 'a 4711 b'
assert_output_json_eq '{ "num": "4711" }'
execute 'a 0x4711 b'
assert_output_json_eq '{ "hex": "0x4711" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/strict_prefix_matching_2.sh 0000755 0001750 0001750 00000001126 13370250152 017263 0000000 0000000 #!/bin/bash
# added 2015-11-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "named literal compaction"
add_rule 'version=2'
add_rule 'rule=:a word %w1:word% %l1:literal{"text":"l"}% b'
add_rule 'rule=:a word %w1:word% %l2:literal{"text":"l2"}% b'
add_rule 'rule=:a word %w1:word% l3 b'
execute 'a word w1 l b'
assert_output_json_eq '{ "l1": "l", "w1": "w1" }'
execute 'a word w1 l2 b'
assert_output_json_eq '{ "l2": "l2", "w1": "w1" }'
execute 'a word w1 l3 b'
assert_output_json_eq '{ "w1": "w1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_rfc5424timestamp-fmt_timestamp-unix.sh 0000755 0001750 0001750 00000001663 13370250152 022321 0000000 0000000 #!/bin/bash
# added 2017-10-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "RFC5424 timestamp in timestamp-unix format"
add_rule 'version=2'
add_rule 'rule=:here is a timestamp %{ "type":"date-rfc5424", "name":"num", "format":"timestamp-unix"}% in RFC5424 format'
execute 'here is a timestamp 2000-03-11T14:15:16+01:00 in RFC5424 format'
assert_output_json_eq '{"num": 952780516}'
# with milliseconds (must be ignored with this format!)
execute 'here is a timestamp 2000-03-11T14:15:16.321+01:00 in RFC5424 format'
assert_output_json_eq '{"num": 952780516}'
#check cases where parsing failure must occur
execute 'here is a timestamp 2000-03-11T14:15:16+01:00in RFC5424 format'
assert_output_json_eq '{ "originalmsg": "here is a timestamp 2000-03-11T14:15:16+01:00in RFC5424 format", "unparsed-data": "2000-03-11T14:15:16+01:00in RFC5424 format" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/missing_line_ending.sh 0000755 0001750 0001750 00000001325 13370250152 016310 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "missing line ending"
reset_rules
add_rule 'version=2'
add_rule_no_LF 'rule=:%field:mac48%'
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
execute 'f0-f6-1c-5f-cc-a2'
assert_output_json_eq '{"field": "f0-f6-1c-5f-cc-a2"}'
# things that need to NOT match
execute 'f0-f6:1c:5f:cc-a2'
assert_output_json_eq '{ "originalmsg": "f0-f6:1c:5f:cc-a2", "unparsed-data": "f0-f6:1c:5f:cc-a2" }'
execute 'f0:f6:1c:xf:cc:a2'
assert_output_json_eq '{ "originalmsg": "f0:f6:1c:xf:cc:a2", "unparsed-data": "f0:f6:1c:xf:cc:a2" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_simple.sh 0000755 0001750 0001750 00000000732 13370250152 015146 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "simple user-defined type"
add_rule 'version=2'
add_rule 'type=@hex-byte:%f1:hexnumber{"maxval": "255"}%'
add_rule 'rule=:a word %w1:word% a byte % .:@hex-byte % another word %w2:word%'
execute 'a word w1 a byte 0xff another word w2'
assert_output_json_eq '{ "w2": "w2", "f1": "0xff", "w1": "w1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_suffixed.sh 0000755 0001750 0001750 00000005423 13370250152 015267 0000000 0000000 #!/bin/bash
# added 2015-02-25 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "field with one of many possible suffixes"
add_rule 'rule=:gc reclaimed %eden_free:suffixed:,:b,kb,mb,gb:number% eden [surviver: %surviver_used:suffixed:;:kb;mb;gb;b:number%/%surviver_size:suffixed:|:b|kb|mb|gb:float%]'
execute 'gc reclaimed 559mb eden [surviver: 95b/30.2mb]'
assert_output_json_eq '{"eden_free": {"value": "559", "suffix":"mb"}, "surviver_used": {"value": "95", "suffix": "b"}, "surviver_size": {"value": "30.2", "suffix": "mb"}}'
reset_rules
add_rule 'rule=:gc reclaimed %eden_free:named_suffixed:size:unit:,:b,kb,mb,gb:number% eden [surviver: %surviver_used:named_suffixed:sz:u:;:kb;mb;gb;b:number%/%surviver_size:suffixed:|:b|kb|mb|gb:float%]'
execute 'gc reclaimed 559mb eden [surviver: 95b/30.2mb]'
assert_output_json_eq '{"eden_free": {"size": "559", "unit":"mb"}, "surviver_used": {"sz": "95", "u": "b"}, "surviver_size": {"value": "30.2", "suffix": "mb"}}'
reset_rules
add_rule 'rule=:gc reclaimed %eden_free:named_suffixed:size:unit:,:b,kb,mb,gb:interpret:int:number% from eden'
execute 'gc reclaimed 559mb from eden'
assert_output_json_eq '{"eden_free": {"size": 559, "unit":"mb"}}'
reset_rules
add_rule 'rule=:disk free: %free:named_suffixed:size:unit:,:\x25,gb:interpret:int:number%'
execute 'disk free: 12%'
assert_output_json_eq '{"free": {"size": 12, "unit":"%"}}'
execute 'disk free: 130gb'
assert_output_json_eq '{"free": {"size": 130, "unit":"gb"}}'
reset_rules
add_rule 'rule=:disk free: %free:named_suffixed:size:unit:\x3a:gb\x3a\x25:interpret:int:number%'
execute 'disk free: 12%'
assert_output_json_eq '{"free": {"size": 12, "unit":"%"}}'
execute 'disk free: 130gb'
assert_output_json_eq '{"free": {"size": 130, "unit":"gb"}}'
reset_rules
add_rule 'rule=:eden,surviver,old-gen available-capacity: %available_memory:tokenized:,:named_suffixed:size:unit:,:mb,gb:interpret:int:number%'
execute 'eden,surviver,old-gen available-capacity: 400mb,40mb,1gb'
assert_output_json_eq '{"available_memory": [{"size": 400, "unit":"mb"}, {"size": 40, "unit":"mb"}, {"size": 1, "unit":"gb"}]}'
reset_rules
add_rule 'rule=:eden,surviver,old-gen available-capacity: %available_memory:named_suffixed:size:unit:,:mb,gb:tokenized:,:interpret:int:number%'
execute 'eden,surviver,old-gen available-capacity: 400,40,1024mb'
assert_output_json_eq '{"available_memory": {"size": [400, 40, 1024], "unit":"mb"}}'
reset_rules
add_rule 'rule=:eden:surviver:old-gen available-capacity: %available_memory:named_suffixed:size:unit:,:mb,gb:tokenized:\x3a:interpret:int:number%'
execute 'eden:surviver:old-gen available-capacity: 400:40:1024mb'
assert_output_json_eq '{"available_memory": {"size": [400, 40, 1024], "unit":"mb"}}'
cleanup_tmp_files
liblognorm-2.0.6/tests/alternative_nested.sh 0000755 0001750 0001750 00000001166 13370250152 016167 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "nested alternative syntax"
add_rule 'version=2'
add_rule 'rule=:a %
{"type":"alternative",
"parser": [
[
{"type":"number", "name":"num1"},
{"type":"literal", "text":":"},
{"type":"number", "name":"num"},
],
{"type":"hexnumber", "name":"hex"}
]
}% b'
execute 'a 47:11 b'
assert_output_json_eq '{"num": "11", "num1": "47" }'
execute 'a 0x4711 b'
assert_output_json_eq '{ "hex": "0x4711" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/runaway_rule_v1.sh 0000755 0001750 0001750 00000001161 13370250152 015425 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
# Note that this test produces an error message, as it encounters the
# runaway rule. This is OK and actually must happen. The prime point
# of the test is that it correctly loads the second rule, which
# would otherwise be consumed by the runaway rule.
. $srcdir/exec.sh
test_def $0 "runaway rule (unmatched percent signs) v1 version"
reset_rules
add_rule 'rule=:test %f1:word unmatched percent'
add_rule 'rule=:%field:word%'
execute 'data'
assert_output_json_eq '{"field": "data"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_string_lazy_matching.sh 0000755 0001750 0001750 00000001636 13370250152 017673 0000000 0000000 #!/bin/bash
# added 2018-06-26 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "string parser with lazy matching"
reset_rules
add_rule 'version=2'
add_rule 'rule=:Rule-ID:%-:whitespace%%
f:string{"matching.permitted":[
{"class":"digit"},
{"chars":"abcdefghijklmnopqrstuvwxyz"},
{"chars":"ABCDEFGHIJKLMNOPQRSTUVWXYZ"},
{"chars":"-"},
],
"quoting.escape.mode":"none",
"matching.mode":"lazy"}%%resta:rest%'
execute 'Rule-ID: XY7azl704-84a39894783423467a33f5b48bccd23c-a0n63i2\r\nQNas: '
assert_output_json_eq '{ "resta": "\\r\\nQNas: ", "f": "XY7azl704-84a39894783423467a33f5b48bccd23c-a0n63i2" }'
execute 'Rule-ID: XY7azl704-84a39894783423467a33f5b48bccd23c-a0n63i2 LWL'
assert_output_json_eq '{ "resta": " LWL", "f": "XY7azl704-84a39894783423467a33f5b48bccd23c-a0n63i2" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_ipaddr_dotdot3.sh 0000755 0001750 0001750 00000002027 13370250152 016557 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "user-defined type with '..' name embedded in other fields"
add_rule 'version=2'
add_rule 'type=@IPaddr:%..:ipv4%'
add_rule 'type=@IPaddr:%..:ipv6%'
add_rule 'type=@ipOrNumber:%..:@IPaddr{"priority":"1000"}%'
add_rule 'type=@ipOrNumber:%..:number%'
#add_rule 'type=@ipOrNumber:%..:@IPaddr%' # if we enable this instead of the above, the test would break
add_rule 'rule=:a word %w1:word% an ip address %ip:@ipOrNumber% another word %w2:word%'
execute 'a word word1 an ip address 10.0.0.1 another word word2'
assert_output_json_eq '{ "w2": "word2", "ip": "10.0.0.1", "w1": "word1" }'
execute 'a word word1 an ip address 2001:DB8:0:1::10:1FF another word word2'
assert_output_json_eq '{ "w2": "word2", "ip": "2001:DB8:0:1::10:1FF", "w1": "word1" }'
execute 'a word word1 an ip address 111 another word word2'
assert_output_json_eq '{ "w2": "word2", "ip": "111", "w1": "word1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_tokenized_with_regex.sh 0000755 0001750 0001750 00000001570 13370250152 017672 0000000 0000000 #!/bin/bash
# added 2014-11-17 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
#test that tokenized disables regex if the parent context has it disabled
. $srcdir/exec.sh
no_solaris10
test_def $0 "tokenized field with regex based field"
add_rule 'rule=:%parts:tokenized:,:regex:[^, ]+% %text:rest%'
execute '123,abc,456,def foo bar'
assert_output_contains '"unparsed-data": "123,abc,456,def foo bar"'
assert_output_contains '"originalmsg": "123,abc,456,def foo bar"'
#and then enables it when parent context has it enabled
export ln_opts='-oallowRegex'
. $srcdir/exec.sh
test_def $0 "tokenized field with regex based field"
add_rule 'rule=:%parts:tokenized:,:regex:[^, ]+% %text:rest%'
execute '123,abc,456,def foo bar'
assert_output_contains '"parts": [ "123", "abc", "456", "def" ]'
assert_output_contains '"text": "foo bar"'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_regex_with_consume_group.sh 0000755 0001750 0001750 00000001246 13370250152 020563 0000000 0000000 #!/bin/bash
# added 2014-11-14 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
export ln_opts='-oallowRegex'
test_def $0 "regex field with consume-group"
add_rule 'rule=:%first:regex:([a-z]{2}([a-f0-9]+,)+):0%%rest:rest%'
execute 'ad1234abcd,4567ef12,8901abef'
assert_output_contains '"first": "ad1234abcd,4567ef12,"'
assert_output_contains '"rest": "8901abef"'
reset_rules
add_rule 'rule=:%first:regex:(([a-z]{2})([a-f0-9]+,)+):2%%rest:rest%'
execute 'ad1234abcd,4567ef12,8901abef'
assert_output_contains '"first": "ad"'
assert_output_contains '"rest": "1234abcd,4567ef12,8901abef"'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_twotypes.sh 0000755 0001750 0001750 00000001003 13370250152 015543 0000000 0000000 #!/bin/bash
# added 2015-10-30 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "two user-defined types"
add_rule 'version=2'
add_rule 'type=@hex-byte:%f1:hexnumber{"maxval": "255"}%'
add_rule 'type=@word-type:%w1:word%'
add_rule 'rule=:a word %.:@word-type% a byte % .:@hex-byte % another word %w2:word%'
execute 'a word w1 a byte 0xff another word w2'
assert_output_json_eq '{ "w2": "w2", "f1": "0xff", "w1": "w1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_name_value_v1.sh 0000755 0001750 0001750 00000002400 13370250152 016164 0000000 0000000 #!/bin/bash
# added 2015-04-25 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "name/value parser"
add_rule 'rule=:%f:name-value-list%'
execute 'name=value'
assert_output_json_eq '{ "f": { "name": "value" } }'
execute 'name1=value1 name2=value2 name3=value3'
assert_output_json_eq '{ "f": { "name1": "value1", "name2": "value2", "name3": "value3" } }'
execute 'name1=value1 name2=value2 name3=value3 '
assert_output_json_eq '{ "f": { "name1": "value1", "name2": "value2", "name3": "value3" } }'
execute 'name1= name2=value2 name3=value3 '
assert_output_json_eq '{ "f": { "name1": "", "name2": "value2", "name3": "value3" } }'
execute 'origin=core.action processed=67 failed=0 suspended=0 suspended.duration=0 resumed=0 '
assert_output_json_eq '{ "f": { "origin": "core.action", "processed": "67", "failed": "0", "suspended": "0", "suspended.duration": "0", "resumed": "0" } }'
# check for required non-matches
execute 'name'
assert_output_json_eq ' {"originalmsg": "name", "unparsed-data": "name" }'
execute 'noname1 name2=value2 name3=value3 '
assert_output_json_eq '{ "originalmsg": "noname1 name2=value2 name3=value3 ", "unparsed-data": "noname1 name2=value2 name3=value3 " }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_hexnumber_range.sh 0000755 0001750 0001750 00000001373 13370250152 016623 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "hexnumber field with range checks"
add_rule 'version=2'
add_rule 'rule=:here is a number %num:hexnumber{"maxval":191}% in hex form'
execute 'here is a number 0x12 in hex form'
assert_output_json_eq '{"num": "0x12"}'
execute 'here is a number 0x0 in hex form'
assert_output_json_eq '{"num": "0x0"}'
execute 'here is a number 0xBf in hex form'
assert_output_json_eq '{"num": "0xBf"}'
#check cases where parsing failure must occur
execute 'here is a number 0xc0 in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0xc0 in hex form", "unparsed-data": "0xc0 in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_name_value.sh 0000755 0001750 0001750 00000002425 13370250152 015565 0000000 0000000 #!/bin/bash
# added 2015-04-25 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "name/value parser"
add_rule 'version=2'
add_rule 'rule=:%f:name-value-list%'
execute 'name=value'
assert_output_json_eq '{ "f": { "name": "value" } }'
execute 'name1=value1 name2=value2 name3=value3'
assert_output_json_eq '{ "f": { "name1": "value1", "name2": "value2", "name3": "value3" } }'
execute 'name1=value1 name2=value2 name3=value3 '
assert_output_json_eq '{ "f": { "name1": "value1", "name2": "value2", "name3": "value3" } }'
execute 'name1= name2=value2 name3=value3 '
assert_output_json_eq '{ "f": { "name1": "", "name2": "value2", "name3": "value3" } }'
execute 'origin=core.action processed=67 failed=0 suspended=0 suspended.duration=0 resumed=0 '
assert_output_json_eq '{ "f": { "origin": "core.action", "processed": "67", "failed": "0", "suspended": "0", "suspended.duration": "0", "resumed": "0" } }'
# check for required non-matches
execute 'name'
assert_output_json_eq ' {"originalmsg": "name", "unparsed-data": "name" }'
execute 'noname1 name2=value2 name3=value3 '
assert_output_json_eq '{ "originalmsg": "noname1 name2=value2 name3=value3 ", "unparsed-data": "noname1 name2=value2 name3=value3 " }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_kernel_timestamp_v1.sh 0000755 0001750 0001750 00000003255 13370250152 017424 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "kernel timestamp parser"
add_rule 'rule=:begin %timestamp:kernel-timestamp% end'
execute 'begin [12345.123456] end'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
reset_rules
add_rule 'rule=:begin %timestamp:kernel-timestamp%'
execute 'begin [12345.123456]'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
reset_rules
add_rule 'rule=:%timestamp:kernel-timestamp%'
execute '[12345.123456]'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
execute '[154469.133028]'
assert_output_json_eq '{ "timestamp": "[154469.133028]"}'
execute '[123456789012.123456]'
assert_output_json_eq '{ "timestamp": "[123456789012.123456]"}'
#check cases where parsing failure must occur
execute '[1234.123456]'
assert_output_json_eq '{"originalmsg": "[1234.123456]", "unparsed-data": "[1234.123456]" }'
execute '[1234567890123.123456]'
assert_output_json_eq '{"originalmsg": "[1234567890123.123456]", "unparsed-data": "[1234567890123.123456]" }'
execute '[123456789012.12345]'
assert_output_json_eq '{ "originalmsg": "[123456789012.12345]", "unparsed-data": "[123456789012.12345]" }'
execute '[123456789012.1234567]'
assert_output_json_eq '{ "originalmsg": "[123456789012.1234567]", "unparsed-data": "[123456789012.1234567]" }'
execute '(123456789012.123456]'
assert_output_json_eq '{ "originalmsg": "(123456789012.123456]", "unparsed-data": "(123456789012.123456]" }'
execute '[123456789012.123456'
assert_output_json_eq '{ "originalmsg": "[123456789012.123456", "unparsed-data": "[123456789012.123456" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_ipv6.sh 0000755 0001750 0001750 00000004353 13370250152 014337 0000000 0000000 #!/bin/bash
# added 2015-06-23 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
add_rule 'version=2'
test_def $0 "IPv6 parser"
add_rule 'rule=:%f:ipv6%'
# examples from RFC4291, sect. 2.2
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345:6789'
assert_output_json_eq '{ "f": "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789" }'
execute 'ABCD:EF01:2345:6789:abcd:EF01:2345:6789' # mixed hex case
assert_output_json_eq '{ "f": "ABCD:EF01:2345:6789:abcd:EF01:2345:6789" }'
execute '2001:DB8:0:0:8:800:200C:417A'
assert_output_json_eq '{ "f": "2001:DB8:0:0:8:800:200C:417A" }'
execute '0:0:0:0:0:0:0:1'
assert_output_json_eq '{ "f": "0:0:0:0:0:0:0:1" }'
execute '2001:DB8::8:800:200C:417A'
assert_output_json_eq '{ "f": "2001:DB8::8:800:200C:417A" }'
execute 'FF01::101'
assert_output_json_eq '{ "f": "FF01::101" }'
execute '::1'
assert_output_json_eq '{ "f": "::1" }'
execute '::'
assert_output_json_eq '{ "f": "::" }'
execute '0:0:0:0:0:0:13.1.68.3'
assert_output_json_eq '{ "f": "0:0:0:0:0:0:13.1.68.3" }'
execute '::13.1.68.3'
assert_output_json_eq '{ "f": "::13.1.68.3" }'
execute '::FFFF:129.144.52.38'
assert_output_json_eq '{ "f": "::FFFF:129.144.52.38" }'
# invalid samples
execute '2001:DB8::8::800:200C:417A' # two :: sequences
assert_output_json_eq '{ "originalmsg": "2001:DB8::8::800:200C:417A", "unparsed-data": "2001:DB8::8::800:200C:417A" }'
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345::6789' # :: with too many blocks
assert_output_json_eq '{ "originalmsg": "ABCD:EF01:2345:6789:ABCD:EF01:2345::6789", "unparsed-data": "ABCD:EF01:2345:6789:ABCD:EF01:2345::6789" }'
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798' # too many blocks (9)
assert_output_json_eq '{"originalmsg": "ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798", "unparsed-data": "ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798" }'
execute ':0:0:0:0:0:0:1' # missing first digit
assert_output_json_eq '{ "originalmsg": ":0:0:0:0:0:0:1", "unparsed-data": ":0:0:0:0:0:0:1" }'
execute '0:0:0:0:0:0:0:' # missing last digit
assert_output_json_eq '{ "originalmsg": "0:0:0:0:0:0:0:", "unparsed-data": "0:0:0:0:0:0:0:" }'
execute '13.1.68.3' # pure IPv4 address
assert_output_json_eq '{ "originalmsg": "13.1.68.3", "unparsed-data": "13.1.68.3" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_kernel_timestamp.sh 0000755 0001750 0001750 00000003337 13370250152 017017 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "kernel timestamp parser"
add_rule 'version=2'
add_rule 'rule=:begin %timestamp:kernel-timestamp% end'
execute 'begin [12345.123456] end'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:begin %timestamp:kernel-timestamp%'
execute 'begin [12345.123456]'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:%timestamp:kernel-timestamp%'
execute '[12345.123456]'
assert_output_json_eq '{ "timestamp": "[12345.123456]"}'
execute '[154469.133028]'
assert_output_json_eq '{ "timestamp": "[154469.133028]"}'
execute '[123456789012.123456]'
assert_output_json_eq '{ "timestamp": "[123456789012.123456]"}'
#check cases where parsing failure must occur
execute '[1234.123456]'
assert_output_json_eq '{"originalmsg": "[1234.123456]", "unparsed-data": "[1234.123456]" }'
execute '[1234567890123.123456]'
assert_output_json_eq '{"originalmsg": "[1234567890123.123456]", "unparsed-data": "[1234567890123.123456]" }'
execute '[123456789012.12345]'
assert_output_json_eq '{ "originalmsg": "[123456789012.12345]", "unparsed-data": "[123456789012.12345]" }'
execute '[123456789012.1234567]'
assert_output_json_eq '{ "originalmsg": "[123456789012.1234567]", "unparsed-data": "[123456789012.1234567]" }'
execute '(123456789012.123456]'
assert_output_json_eq '{ "originalmsg": "(123456789012.123456]", "unparsed-data": "(123456789012.123456]" }'
execute '[123456789012.123456'
assert_output_json_eq '{ "originalmsg": "[123456789012.123456", "unparsed-data": "[123456789012.123456" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/alternative_three.sh 0000755 0001750 0001750 00000001064 13370250152 016011 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "alternative syntax with three parsers"
add_rule 'version=2'
add_rule 'rule=:a %{"type":"alternative", "parser":[{"name":"num", "type":"number"}, {"name":"hex", "type":"hexnumber"}, {"name":"wrd", "type":"word"}]}% b'
execute 'a 4711 b'
assert_output_json_eq '{ "num": "4711" }'
execute 'a 0x4711 b'
assert_output_json_eq '{ "hex": "0x4711" }'
execute 'a 0xyz b'
assert_output_json_eq '{ "wrd": "0xyz" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cee-syslog_jsoncnf.sh 0000755 0001750 0001750 00000001604 13370250152 017241 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "cee-syslog field"
add_rule 'version=2'
add_rule 'rule=:%{"name":"field", "type":"cee-syslog"}%'
execute '@cee:{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee:{"f1": "1", "f2": 2} ' # note the trailing space
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee: {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee: {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#
# Things that MUST NOT work
#
execute '@cee: {"f1": "1", "f2": 2} data'
assert_output_json_eq '{ "originalmsg": "@cee: {\"f1\": \"1\", \"f2\": 2} data", "unparsed-data": "@cee: {\"f1\": \"1\", \"f2\": 2} data" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_nested_segfault.sh 0000755 0001750 0001750 00000001046 13370250152 017030 0000000 0000000 #!/bin/bash
# added 2018-04-07 by Vincent Tondellier
# based on usrdef_twotypes.sh
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "nested user-defined types"
add_rule 'version=2'
add_rule 'type=@hex-byte:%..:hexnumber{"maxval": "255"}%'
add_rule 'type=@two-hex-bytes:%f1:@hex-byte% %f2:@hex-byte%'
add_rule 'type=@unused:stop'
add_rule 'rule=:two bytes %.:@two-hex-bytes% %-:@unused%'
execute 'two bytes 0xff 0x16 stop'
assert_output_json_eq '{ "f1": "0xff", "f2": "0x16" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_hexnumber-fmt_number.sh 0000755 0001750 0001750 00000001142 13370250152 017575 0000000 0000000 #!/bin/bash
# added 2017-10-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "hexnumber field"
add_rule 'version=2'
add_rule 'rule=:here is a number %{ "type":"hexnumber", "name":"num", "format":"number"} % in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": 4660}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_ipv6_jsoncnf.sh 0000755 0001750 0001750 00000004400 13370250152 016050 0000000 0000000 #!/bin/bash
# added 2015-06-23 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "IPv6 parser"
add_rule 'version=2'
add_rule 'rule=:%{"name":"f", "type":"ipv6"}%'
# examples from RFC4291, sect. 2.2
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345:6789'
assert_output_json_eq '{ "f": "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789" }'
execute 'ABCD:EF01:2345:6789:abcd:EF01:2345:6789' # mixed hex case
assert_output_json_eq '{ "f": "ABCD:EF01:2345:6789:abcd:EF01:2345:6789" }'
execute '2001:DB8:0:0:8:800:200C:417A'
assert_output_json_eq '{ "f": "2001:DB8:0:0:8:800:200C:417A" }'
execute '0:0:0:0:0:0:0:1'
assert_output_json_eq '{ "f": "0:0:0:0:0:0:0:1" }'
execute '2001:DB8::8:800:200C:417A'
assert_output_json_eq '{ "f": "2001:DB8::8:800:200C:417A" }'
execute 'FF01::101'
assert_output_json_eq '{ "f": "FF01::101" }'
execute '::1'
assert_output_json_eq '{ "f": "::1" }'
execute '::'
assert_output_json_eq '{ "f": "::" }'
execute '0:0:0:0:0:0:13.1.68.3'
assert_output_json_eq '{ "f": "0:0:0:0:0:0:13.1.68.3" }'
execute '::13.1.68.3'
assert_output_json_eq '{ "f": "::13.1.68.3" }'
execute '::FFFF:129.144.52.38'
assert_output_json_eq '{ "f": "::FFFF:129.144.52.38" }'
# invalid samples
execute '2001:DB8::8::800:200C:417A' # two :: sequences
assert_output_json_eq '{ "originalmsg": "2001:DB8::8::800:200C:417A", "unparsed-data": "2001:DB8::8::800:200C:417A" }'
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345::6789' # :: with too many blocks
assert_output_json_eq '{ "originalmsg": "ABCD:EF01:2345:6789:ABCD:EF01:2345::6789", "unparsed-data": "ABCD:EF01:2345:6789:ABCD:EF01:2345::6789" }'
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798' # too many blocks (9)
assert_output_json_eq '{"originalmsg": "ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798", "unparsed-data": "ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798" }'
execute ':0:0:0:0:0:0:1' # missing first digit
assert_output_json_eq '{ "originalmsg": ":0:0:0:0:0:0:1", "unparsed-data": ":0:0:0:0:0:0:1" }'
execute '0:0:0:0:0:0:0:' # missing last digit
assert_output_json_eq '{ "originalmsg": "0:0:0:0:0:0:0:", "unparsed-data": "0:0:0:0:0:0:0:" }'
execute '13.1.68.3' # pure IPv4 address
assert_output_json_eq '{ "originalmsg": "13.1.68.3", "unparsed-data": "13.1.68.3" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_ipaddr_dotdot.sh 0000755 0001750 0001750 00000001121 13370250152 016466 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "user-defined type with '..' name"
add_rule 'version=2'
add_rule 'type=@IPaddr:%..:ipv4%'
add_rule 'type=@IPaddr:%..:ipv6%'
add_rule 'rule=:an ip address %ip:@IPaddr%'
execute 'an ip address 10.0.0.1'
assert_output_json_eq '{ "ip": "10.0.0.1" }'
execute 'an ip address 127::1'
assert_output_json_eq '{ "ip": "127::1" }'
execute 'an ip address 2001:DB8:0:1::10:1FF'
assert_output_json_eq '{ "ip": "2001:DB8:0:1::10:1FF" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/lognormalizer-invld-call.sh 0000755 0001750 0001750 00000000520 13370250152 017207 0000000 0000000 #!/bin/bash
# This file is part of the liblognorm project, released under ASL 2.0
echo running test $0
if ../src/lognormalizer ; then
echo "FAIL: lognormalizer did not detect missing rulebase"
exit 1
fi
if ../src/lognormalizer -r test -R test ; then
echo "FAIL: lognormalizer did not detect both -r and -R given"
exit 1
fi
liblognorm-2.0.6/tests/strict_prefix_actual_sample1.sh 0000755 0001750 0001750 00000001621 13370250152 020143 0000000 0000000 #!/bin/bash
# added 2015-11-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "one rule is a strict prefix of a longer one"
add_rule 'version=2'
add_rule 'prefix=%timestamp:date-rfc3164% %hostname:word% BL-WLC01: *%programname:char-to:\x3a%: %timestamp:date-rfc3164%.%fracsec:number%:'
add_rule 'rule=wifi: #LOG-3-Q_IND: webauth_redirect.c:1238 read error on server socket, errno=131[...It occurred %count:number% times.!]'
add_rule 'rule=wifi: #LOG-3-Q_IND: webauth_redirect.c:1238 read error on server socket, errno=131'
execute 'Sep 28 23:53:19 192.168.123.99 BL-WLC01: *dtlArpTask: Sep 28 23:53:19.614: #LOG-3-Q_IND: webauth_redirect.c:1238 read error on server socket, errno=131'
assert_output_json_eq '{ "fracsec": "614", "timestamp": "Sep 28 23:53:19", "programname": "dtlArpTask", "hostname": "192.168.123.99" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_checkpoint-lea_v1.sh 0000755 0001750 0001750 00000000553 13370250152 016745 0000000 0000000 #!/bin/bash
# added 2015-06-18 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "Checkpoint LEA parser"
add_rule 'rule=:%f:checkpoint-lea%'
execute 'tcp_flags: RST-ACK; src: 192.168.0.1;'
assert_output_json_eq '{ "f": { "tcp_flags": "RST-ACK", "src": "192.168.0.1" } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_v2-iptables_jsoncnf.sh 0000755 0001750 0001750 00000005044 13370250152 017321 0000000 0000000 #!/bin/bash
# added 2015-04-30 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "v2-iptables field"
add_rule 'version=2'
add_rule 'rule=:iptables output denied: %{"name":"field", "type":"v2-iptables"}%'
# first, a real-world case
execute 'iptables output denied: IN= OUT=eth0 SRC=176.9.56.141 DST=168.192.14.3 LEN=32 TOS=0x00 PREC=0x00 TTL=64 ID=39110 DF PROTO=UDP SPT=49564 DPT=2010 LEN=12'
assert_output_json_eq '{ "field": { "IN": "", "OUT": "eth0", "SRC": "176.9.56.141", "DST": "168.192.14.3", "LEN": "12", "TOS": "0x00", "PREC": "0x00", "TTL": "64", "ID": "39110", "DF": null, "PROTO": "UDP", "SPT": "49564", "DPT": "2010" } }'
# now some more "fabricated" cases for more readable tests
reset_rules
add_rule 'version=2'
add_rule 'rule=:iptables: %field:v2-iptables%'
execute 'iptables: IN=value SECOND=test'
assert_output_json_eq '{ "field": { "IN": "value", "SECOND": "test" } }'
execute 'iptables: IN= SECOND=test'
assert_output_json_eq '{ "field": { "IN": ""} }'
execute 'iptables: IN SECOND=test'
assert_output_json_eq '{ "field": { "IN": null} }'
execute 'iptables: IN=invalue OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": "invalue", "OUT": "outvalue" } }'
execute 'iptables: IN= OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": "", "OUT": "outvalue" } }'
execute 'iptables: IN OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": null, "OUT": "outvalue" } }'
#
#check cases where parsing failure must occur
#
echo verify failure cases
# lower case is not permitted
execute 'iptables: in=value'
assert_output_json_eq '{ "originalmsg": "iptables: in=value", "unparsed-data": "in=value" }'
execute 'iptables: in='
assert_output_json_eq '{ "originalmsg": "iptables: in=", "unparsed-data": "in=" }'
execute 'iptables: in'
assert_output_json_eq '{ "originalmsg": "iptables: in", "unparsed-data": "in" }'
execute 'iptables: IN' # single field is NOT permitted!
assert_output_json_eq '{ "originalmsg": "iptables: IN", "unparsed-data": "IN" }'
# multiple spaces between n=v pairs are not permitted
execute 'iptables: IN=invalue OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN=invalue OUT=outvalue", "unparsed-data": "IN=invalue OUT=outvalue" }'
execute 'iptables: IN= OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN= OUT=outvalue", "unparsed-data": "IN= OUT=outvalue" }'
execute 'iptables: IN OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN OUT=outvalue", "unparsed-data": "IN OUT=outvalue" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_actual1.sh 0000755 0001750 0001750 00000002236 13370250152 015210 0000000 0000000 #!/bin/bash
# added 2015-10-30 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "an actual test case for user-defined types"
add_rule 'version=2'
add_rule 'type=@endpid:%{"type":"alternative","parser":[ {"type": "literal", "text":"]"},{"type": "literal", "text":"]:"} ] }%'
add_rule 'type=@AUTOTYPE1:%iface:char-to:/%/%ip:ipv4%(%port:number%)'
add_rule 'type=@AUTOTYPE1:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number%'
add_rule 'type=@AUTOTYPE1:%iface:char-to:\x3a%\x3a%ip:ipv4%'
add_rule 'rule=:a pid[%pid:number%%-:@endpid% b'
add_rule 'rule=:a iface %.:@AUTOTYPE1% b'
execute 'a pid[4711] b'
assert_output_json_eq '{ "pid": "4711" }'
# the next test needs priority assignment
#execute 'a pid[4712]: b'
#assert_output_json_eq '{ "pid": "4712" }'
execute 'a iface inside:10.0.0.1 b'
assert_output_json_eq '{ "ip": "10.0.0.1", "iface": "inside" }'
execute 'a iface inside:10.0.0.1/514 b'
assert_output_json_eq '{ "port": "514", "ip": "10.0.0.1", "iface": "inside" }'
execute 'a iface inside/10.0.0.1(514) b'
assert_output_json_eq '{ "port": "514", "ip": "10.0.0.1", "iface": "inside" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/parser_whitespace_jsoncnf.sh 0000755 0001750 0001750 00000001257 13370250152 017540 0000000 0000000 #!/bin/bash
# added 2015-07-15 by Rainer Gerhards
# This checks if whitespace inside parser definitions is properly treated
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "whitespace in parser definition"
add_rule 'version=2'
add_rule 'rule=:here is a number % {"name":"num", "type":"hexnumber"} % in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": "0x1234"}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_name_value_jsoncnf.sh 0000755 0001750 0001750 00000002452 13370250152 017305 0000000 0000000 #!/bin/bash
# added 2015-04-25 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "name/value parser"
add_rule 'version=2'
add_rule 'rule=:%{"name":"f", "type":"name-value-list"}%'
execute 'name=value'
assert_output_json_eq '{ "f": { "name": "value" } }'
execute 'name1=value1 name2=value2 name3=value3'
assert_output_json_eq '{ "f": { "name1": "value1", "name2": "value2", "name3": "value3" } }'
execute 'name1=value1 name2=value2 name3=value3 '
assert_output_json_eq '{ "f": { "name1": "value1", "name2": "value2", "name3": "value3" } }'
execute 'name1= name2=value2 name3=value3 '
assert_output_json_eq '{ "f": { "name1": "", "name2": "value2", "name3": "value3" } }'
execute 'origin=core.action processed=67 failed=0 suspended=0 suspended.duration=0 resumed=0 '
assert_output_json_eq '{ "f": { "origin": "core.action", "processed": "67", "failed": "0", "suspended": "0", "suspended.duration": "0", "resumed": "0" } }'
# check for required non-matches
execute 'name'
assert_output_json_eq ' {"originalmsg": "name", "unparsed-data": "name" }'
execute 'noname1 name2=value2 name3=value3 '
assert_output_json_eq '{ "originalmsg": "noname1 name2=value2 name3=value3 ", "unparsed-data": "noname1 name2=value2 name3=value3 " }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_rest_v1.sh 0000755 0001750 0001750 00000003564 13370250152 015041 0000000 0000000 #!/bin/bash
# some more tests for the "rest" motif, especially to ensure that
# "rest" will not interfere with more specific rules.
# added 2015-04-27
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "rest matches"
#tail recursion with default tail field
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number% (%label2:char-to:)%)'
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number% (%label2:char-to:)%)%tail:rest%'
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number%'
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number%%tail:rest%'
# real-world cisco samples
execute 'Outside:10.20.30.40/35 (40.30.20.10/35)'
assert_output_json_eq '{ "label2": "40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 (40.30.20.10/35) with rest'
assert_output_json_eq '{ "tail": " with rest", "label2": "40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 (40.30.20.10/35 brace missing'
assert_output_json_eq '{ "tail": " (40.30.20.10\/35 brace missing", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 40.30.20.10/35'
assert_output_json_eq '{ "tail": " 40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
#
# test expected mismatches
#
execute 'not at all!'
assert_output_json_eq '{ "originalmsg": "not at all!", "unparsed-data": "not at all!" }'
execute 'Outside 10.20.30.40/35 40.30.20.10/35'
assert_output_json_eq '{ "originalmsg": "Outside 10.20.30.40\/35 40.30.20.10\/35", "unparsed-data": "Outside 10.20.30.40\/35 40.30.20.10\/35" }'
execute 'Outside:10.20.30.40/aa 40.30.20.10/35'
assert_output_json_eq '{ "originalmsg": "Outside:10.20.30.40\/aa 40.30.20.10\/35", "unparsed-data": "aa 40.30.20.10\/35" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_whitespace_v1.sh 0000755 0001750 0001750 00000002127 13370250152 016212 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "whitespace parser"
# the "word" parser unfortunately treats everything except
# a SP as being part of the word. So a HT inside a word is
# permitted, which does not work well with what we
# want to test here. To solve this problem, we use op-quoted-string.
# However, we must actually quote the samples containing HT, because
# that parser also treats HT as being part of the word. But thanks
# to the quotes, we can force it not to do that.
# rgerhards, 2015-04-30
add_rule 'rule=:%a:op-quoted-string%%-:whitespace%%b:op-quoted-string%'
execute 'word1 word2' # multiple spaces
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute 'word1 word2' # single space
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute '"word1" "word2"' # tab (US-ASCII HT)
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute '"word1" "word2"' # mix of tab and spaces
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/runaway_rule_comment_v1.sh 0000755 0001750 0001750 00000001204 13370250152 017145 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
# Note that this test produces an error message, as it encounters the
# runaway rule. This is OK and actually must happen. The prime point
# of the test is that it correctly loads the second rule, which
# would otherwise be consumed by the runaway rule.
. $srcdir/exec.sh
test_def $0 "runaway rule with comment lines (v1)"
reset_rules
add_rule 'rule=:test %f1:word unmatched percent'
add_rule ''
add_rule '#comment'
add_rule 'rule=:%field:word%'
execute 'data'
assert_output_json_eq '{"field": "data"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_ipaddr_dotdot2.sh 0000755 0001750 0001750 00000001304 13370250152 016553 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "user-defined type with '..' name embedded in other fields"
add_rule 'version=2'
add_rule 'type=@IPaddr:%..:ipv4%'
add_rule 'type=@IPaddr:%..:ipv6%'
add_rule 'rule=:a word %w1:word% an ip address %ip:@IPaddr% another word %w2:word%'
execute 'a word word1 an ip address 10.0.0.1 another word word2'
assert_output_json_eq '{ "w2": "word2", "ip": "10.0.0.1", "w1": "word1" }'
execute 'a word word1 an ip address 2001:DB8:0:1::10:1FF another word word2'
assert_output_json_eq '{ "w2": "word2", "ip": "2001:DB8:0:1::10:1FF", "w1": "word1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_regex_with_consume_group_and_return_group.sh 0000755 0001750 0001750 00000001276 13370250152 024223 0000000 0000000 #!/bin/bash
# added 2014-11-14 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
export ln_opts='-oallowRegex'
no_solaris10
test_def $0 "regex field with consume-group and return-group"
set -x
add_rule 'rule=:%first:regex:[a-z]{2}(([a-f0-9]+),)+:0:2%%rest:rest%'
execute 'ad1234abcd,4567ef12,8901abef'
assert_output_contains '"first": "4567ef12"'
assert_output_contains '"rest": "8901abef"'
reset_rules
add_rule 'rule=:%first:regex:(([a-z]{2})(([a-f0-9]+),)+):2:4%%rest:rest%'
execute 'ad1234abcd,4567ef12,8901abef'
assert_output_contains '"first": "4567ef12"'
assert_output_contains '"rest": "1234abcd,4567ef12,8901abef"'
cleanup_tmp_files
liblognorm-2.0.6/tests/repeat_alternative_nested.sh 0000755 0001750 0001750 00000002402 13370250152 017521 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "repeat with nested alternative syntax"
add_rule 'version=2'
add_rule 'rule=:a %{"name":"numbers", "type":"repeat",
"parser":
{ "type":"alternative", "parser": [
[ {"type":"number", "name":"n1"},
{"type":"literal", "text":":"},
{"type":"number", "name":"n2"},
],
{"type":"hexnumber", "name":"hex"}
]
},
"while":[
{"type":"literal", "text":", "}
]
}% b'
execute 'a 1:2, 3:4, 5:6, 7:8 b'
assert_output_json_eq '{ "numbers": [ { "n2": "2", "n1": "1" }, { "n2": "4", "n1": "3" }, { "n2": "6", "n1": "5" }, { "n2": "8", "n1": "7" } ] }'
execute 'a 0x4711 b'
assert_output_json_eq '{ "numbers": [ { "hex": "0x4711" } ] }'
# note: 0x4711, 1:2 does not work because hexnumber expects a SP after
# the number! Thus we use the reverse. We could add this case once
# we have added an option for more relaxed matching to hexnumber.
execute 'a 1:2, 0x4711 b'
assert_output_json_eq '{ "numbers": [ { "n2": "2", "n1": "1" }, { "hex": "0x4711" } ] }'
cleanup_tmp_files
liblognorm-2.0.6/tests/rule_last_str_short.sh 0000755 0001750 0001750 00000000367 13370250152 016412 0000000 0000000 #!/bin/bash
. $srcdir/exec.sh
test_def $0 "string being last in a rule (see also: rule_last_str_long.sh)"
add_rule 'version=2'
add_rule 'rule=:%string:string%'
execute 'string'
assert_output_json_eq '{"string": "string" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/repeat_very_simple.sh 0000755 0001750 0001750 00000001036 13370250152 016201 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "very simple repeat syntax"
add_rule 'version=2'
add_rule 'rule=:a %{"name":"numbers", "type":"repeat",
"parser":
{"name":"n", "type":"number"},
"while":
{"type":"literal", "text":", "}
}% b %w:word%
'
execute 'a 1, 2, 3, 4 b test'
assert_output_json_eq '{ "w": "test", "numbers": [ { "n": "1" }, { "n": "2" }, { "n": "3" }, { "n": "4" } ] }'
cleanup_tmp_files
liblognorm-2.0.6/tests/parser_LF.sh 0000755 0001750 0001750 00000001245 13370250152 014162 0000000 0000000 #!/bin/bash
# added 2015-07-15 by Rainer Gerhards
# This checks if a line feed (LF) inside a parser definition is properly treated
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "LF in parser definition"
add_rule 'rule=:here is a number %
num:hexnumber
% in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": "0x1234"}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_hexnumber.sh 0000755 0001750 0001750 00000001074 13370250152 015445 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "hexnumber field"
add_rule 'version=2'
add_rule 'rule=:here is a number %num:hexnumber% in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": "0x1234"}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_regex_invalid_args.sh 0000755 0001750 0001750 00000002305 13370250152 017302 0000000 0000000 #!/bin/bash
# added 2014-11-14 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
export ln_opts='-oallowRegex'
. $srcdir/exec.sh
test_def $0 "invalid type for regex field with everything else defaulted"
add_rule 'rule=:%first:regex:[a-z]+:Q%'
execute 'foo'
assert_output_contains '"originalmsg": "foo"'
assert_output_contains '"unparsed-data": "foo"'
reset_rules
add_rule 'rule=:%first:regex:[a-z]+:%'
execute 'foo'
assert_output_contains '"originalmsg": "foo"'
assert_output_contains '"unparsed-data": "foo"'
reset_rules
add_rule 'rule=:%first:regex:[a-z]+:0:%'
execute 'foo'
assert_output_contains '"originalmsg": "foo"'
assert_output_contains '"unparsed-data": "foo"'
reset_rules
add_rule 'rule=:%first:regex:[a-z]+:0:0q%'
execute 'foo'
assert_output_contains '"originalmsg": "foo"'
assert_output_contains '"unparsed-data": "foo"'
reset_rules
add_rule 'rule=:%first:regex:[a-z]+:0a:0%'
execute 'foo'
assert_output_contains '"originalmsg": "foo"'
assert_output_contains '"unparsed-data": "foo"'
reset_rules
add_rule 'rule=:%first:regex:::::%%%'
execute 'foo'
assert_output_contains '"originalmsg": "foo"'
assert_output_contains '"unparsed-data": "foo"'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_v2-iptables.sh 0000755 0001750 0001750 00000004772 13370250152 015610 0000000 0000000 #!/bin/bash
# added 2015-04-30 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "v2-iptables field"
add_rule 'version=2'
add_rule 'rule=:iptables output denied: %field:v2-iptables%'
# first, a real-world case
execute 'iptables output denied: IN= OUT=eth0 SRC=176.9.56.141 DST=168.192.14.3 LEN=32 TOS=0x00 PREC=0x00 TTL=64 ID=39110 DF PROTO=UDP SPT=49564 DPT=2010 LEN=12'
assert_output_json_eq '{ "field": { "IN": "", "OUT": "eth0", "SRC": "176.9.56.141", "DST": "168.192.14.3", "LEN": "12", "TOS": "0x00", "PREC": "0x00", "TTL": "64", "ID": "39110", "DF": null, "PROTO": "UDP", "SPT": "49564", "DPT": "2010" } }'
# now some more "fabricated" cases for more readable tests
reset_rules
add_rule 'rule=:iptables: %field:v2-iptables%'
execute 'iptables: IN=value SECOND=test'
assert_output_json_eq '{ "field": { "IN": "value", "SECOND": "test" } }'
execute 'iptables: IN= SECOND=test'
assert_output_json_eq '{ "field": { "IN": ""} }'
execute 'iptables: IN SECOND=test'
assert_output_json_eq '{ "field": { "IN": null} }'
execute 'iptables: IN=invalue OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": "invalue", "OUT": "outvalue" } }'
execute 'iptables: IN= OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": "", "OUT": "outvalue" } }'
execute 'iptables: IN OUT=outvalue'
assert_output_json_eq '{ "field": { "IN": null, "OUT": "outvalue" } }'
#
#check cases where parsing failure must occur
#
echo verify failure cases
# lower case is not permitted
execute 'iptables: in=value'
assert_output_json_eq '{ "originalmsg": "iptables: in=value", "unparsed-data": "in=value" }'
execute 'iptables: in='
assert_output_json_eq '{ "originalmsg": "iptables: in=", "unparsed-data": "in=" }'
execute 'iptables: in'
assert_output_json_eq '{ "originalmsg": "iptables: in", "unparsed-data": "in" }'
execute 'iptables: IN' # single field is NOT permitted!
assert_output_json_eq '{ "originalmsg": "iptables: IN", "unparsed-data": "IN" }'
# multiple spaces between n=v pairs are not permitted
execute 'iptables: IN=invalue OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN=invalue OUT=outvalue", "unparsed-data": "IN=invalue OUT=outvalue" }'
execute 'iptables: IN= OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN= OUT=outvalue", "unparsed-data": "IN= OUT=outvalue" }'
execute 'iptables: IN OUT=outvalue'
assert_output_json_eq '{ "originalmsg": "iptables: IN OUT=outvalue", "unparsed-data": "IN OUT=outvalue" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_descent.sh 0000755 0001750 0001750 00000010273 13370250152 015076 0000000 0000000 #!/bin/bash
# added 2014-12-11 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "descent-based parsing field"
#descent with default tail field
add_rule 'rule=:blocked on %device:word% %net:descent:./child.rulebase%at %tm:date-rfc5424%'
reset_rules 'child'
add_rule 'rule=:%ip_addr:ipv4% %tail:rest%' 'child'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %tail:rest%' 'child'
execute 'blocked on gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
execute 'blocked on gw-1 10.20.30.40/16 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
#descent with tail field being explicitly named 'tail'
reset_rules
add_rule 'rule=:blocked on %device:word% %net:descent:./field.rulebase:tail%at %tm:date-rfc5424%'
reset_rules 'field'
add_rule 'rule=:%ip_addr:ipv4% %tail:rest%' 'field'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %tail:rest%' 'field'
execute 'blocked on gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
execute 'blocked on gw-1 10.20.30.40/16 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
#descent with tail field having an arbitrary name
reset_rules
add_rule 'rule=:blocked on %device:word% %net:descent:./subset.rulebase:remaining%at %tm:date-rfc5424%'
reset_rules 'subset'
add_rule 'rule=:%ip_addr:ipv4% %remaining:rest%' 'subset'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %remaining:rest%' 'subset'
execute 'blocked on gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
execute 'blocked on gw-1 10.20.30.40/16 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
#head call handling with separate rulebase and tail field with arbitrary name (this is what the recursive field can't do)
reset_rules
add_rule 'rule=:%net:descent:./alt.rulebase:remains%blocked on %device:word%'
reset_rules 'alt'
add_rule 'rule=:%ip_addr:ipv4% %remains:rest%' 'alt'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %remains:rest%' 'alt'
execute '10.20.30.40 blocked on gw-1'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}}'
execute '10.20.30.40/16 blocked on gw-1'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}}'
#descent-field which calls another descent-field
reset_rules
add_rule 'rule=:%op:descent:./op.rulebase:rest% on %device:word%'
reset_rules 'op'
add_rule 'rule=:%net:descent:./alt.rulebase:remains%%action:word%%rest:rest%' 'op'
reset_rules 'alt'
add_rule 'rule=:%ip_addr:ipv4% %remains:rest%' 'alt'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %remains:rest%' 'alt'
execute '10.20.30.40 blocked on gw-1'
assert_output_json_eq '{"op": {"action": "blocked", "net": {"ip_addr": "10.20.30.40"}}, "device": "gw-1"}'
execute '10.20.30.40/16 unblocked on gw-2'
assert_output_json_eq '{"op": {"action": "unblocked", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}}, "device": "gw-2"}'
#descent with file name having lognorm special char
add_rule 'rule=:blocked on %device:word% %net:descent:./part\x3anet.rulebase%at %tm:date-rfc5424%'
reset_rules 'part:net'
add_rule 'rule=:%ip_addr:ipv4% %tail:rest%' 'part:net'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %tail:rest%' 'part:net'
execute 'blocked on gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
execute 'blocked on gw-1 10.20.30.40/16 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_duration.sh 0000755 0001750 0001750 00000001426 13370250152 015276 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "duration syntax"
add_rule 'version=2'
add_rule 'rule=:duration %field:duration% bytes'
add_rule 'rule=:duration %field:duration%'
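# as the samples below show, a duration has the form [H]H:MM:SS; hours may
# exceed 24 (37:59:42 is accepted), while a minutes value of 60 (37:60:42)
# must be rejected.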
execute 'duration 0:00:42 bytes'
assert_output_json_eq '{"field": "0:00:42"}'
execute 'duration 0:00:42'
assert_output_json_eq '{"field": "0:00:42"}'
execute 'duration 9:00:42 bytes'
assert_output_json_eq '{"field": "9:00:42"}'
execute 'duration 00:00:42 bytes'
assert_output_json_eq '{"field": "00:00:42"}'
execute 'duration 37:59:42 bytes'
assert_output_json_eq '{"field": "37:59:42"}'
execute 'duration 37:60:42 bytes'
assert_output_contains '"unparsed-data": "37:60:42 bytes"'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_descent_with_invalid_ruledef.sh 0000755 0001750 0001750 00000005107 13370250152 021345 0000000 0000000 #!/bin/bash
# added 2014-12-15 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "descent based parsing field, with invalid ruledef"
#invalid parent field name
add_rule 'rule=:%net:desce%'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#no args
add_rule 'rule=:%net:descent%'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#incorrect rulebase file path
rm -f $srcdir/quux.rulebase
add_rule 'rule=:%net:descent:./quux.rulebase%'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#invalid content in rulebase file
reset_rules
add_rule 'rule=:%net:descent:./child.rulebase%'
reset_rules 'child'
add_rule 'rule=:%ip_addr:ipv4 %tail:rest%' 'child'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#empty child rulebase file
reset_rules
add_rule 'rule=:%net:descent:./child.rulebase%'
reset_rules 'child'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#no rulebase given
reset_rules
add_rule 'rule=:%net:descent:'
reset_rules 'child'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#no rulebase and no tail-field given
reset_rules
add_rule 'rule=:%net:descent::'
reset_rules 'child'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#no rulebase given, but has valid tail-field
reset_rules
add_rule 'rule=:%net:descent::foo'
reset_rules 'child'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
#empty tail-field given
echo empty tail-field given
rm tmp.rulebase
reset_rules
add_rule 'rule=:A%net:descent:./child.rulebase:%'
reset_rules 'child'
add_rule 'rule=:%ip_addr:ipv4% %tail:rest%' 'child'
execute 'A10.20.30.40 foo'
assert_output_json_eq '{ "net": { "tail": "foo", "ip_addr": "10.20.30.40" } }'
#named tail-field not populated
echo tail-field not populated
reset_rules
add_rule 'rule=:%net:descent:./child.rulebase:foo% foo'
reset_rules 'child'
add_rule 'rule=:%ip_addr:ipv4% %tail:rest%' 'child'
execute '10.20.30.40 foo'
assert_output_json_eq '{ "originalmsg": "10.20.30.40 foo", "unparsed-data": "10.20.30.40 foo" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_number.sh 0000755 0001750 0001750 00000001047 13370250152 014740 0000000 0000000 #!/bin/bash
# added 2017-10-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "number field"
add_rule 'version=2'
add_rule 'rule=:here is a number %num:number% in dec form'
execute 'here is a number 1234 in dec form'
assert_output_json_eq '{"num": "1234"}'
#check cases where parsing failure must occur
execute 'here is a number 1234in dec form'
assert_output_json_eq '{ "originalmsg": "here is a number 1234in dec form", "unparsed-data": "in dec form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_string_doc_sample_lazy.sh 0000755 0001750 0001750 00000000656 13370250152 020210 0000000 0000000 #!/bin/bash
# added 2018-06-26 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "string syntax"
reset_rules
add_rule 'version=2'
add_rule 'rule=:%f:string{"matching.permitted":[ {"class":"digit"} ], "matching.mode":"lazy"}
%%r:rest%'
execute '12:34 56'
assert_output_json_eq '{ "r": ":34 56", "f": "12" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/rule_last_str_long.sh 0000755 0001750 0001750 00000001251 13370250152 016203 0000000 0000000 #!/bin/bash
. $srcdir/exec.sh
no_solaris10
test_def $0 "multiple formats including string (see also: rule_last_str_short.sh)"
add_rule 'version=2'
add_rule 'rule=:%string:string%'
add_rule 'rule=:before %string:string%'
add_rule 'rule=:%string:string% after'
add_rule 'rule=:before %string:string% after'
add_rule 'rule=:before %string:string% middle %string:string%'
execute 'string'
execute 'before string'
execute 'string after'
execute 'before string after'
execute 'before string middle string'
assert_output_json_eq '{"string": "string" }' '{"string": "string" }''{"string": "string" }''{"string": "string" }''{"string": "string", "string": "string" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cisco-interface-spec.sh 0000755 0001750 0001750 00000004554 13370250152 017444 0000000 0000000 #!/bin/bash
# added 2015-04-13 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "cisco-interface-spec syntax"
add_rule 'rule=:begin %field:cisco-interface-spec% end'
execute 'begin outside:176.97.252.102/50349 end'
assert_output_json_eq '{"field": { "interface": "outside", "ip": "176.97.252.102", "port": "50349" } }'
execute 'begin outside:176.97.252.102/50349(DOMAIN\rainer) end'
# we need to add the backslash escape for the testbench plumbing
assert_output_json_eq '{"field": { "interface": "outside", "ip": "176.97.252.102", "port": "50349", "user": "DOMAIN\\rainer" } }'
execute 'begin outside:176.97.252.102/50349(test/rainer) end'
assert_output_json_eq '{"field": { "interface": "outside", "ip": "176.97.252.102", "port": "50349", "user": "test/rainer" } }'
execute 'begin outside:176.97.252.102/50349(rainer) end'
assert_output_json_eq '{"field": { "interface": "outside", "ip": "176.97.252.102", "port": "50349", "user": "rainer" } }'
execute 'begin outside:192.168.1.13/50179 (192.168.1.13/50179)(LOCAL\some.user) end'
assert_output_json_eq ' { "field": { "interface": "outside", "ip": "192.168.1.13", "port": "50179", "ip2": "192.168.1.13", "port2": "50179", "user": "LOCAL\\some.user" } }'
execute 'begin outside:192.168.1.13/50179 (192.168.1.13/50179) (LOCAL\some.user) end'
assert_output_json_eq ' { "field": { "interface": "outside", "ip": "192.168.1.13", "port": "50179", "ip2": "192.168.1.13", "port2": "50179", "user": "LOCAL\\some.user" } }'
execute 'begin 192.168.1.13/50179 (192.168.1.13/50179) (LOCAL\without.if) end'
assert_output_json_eq ' { "field": { "ip": "192.168.1.13", "port": "50179", "ip2": "192.168.1.13", "port2": "50179", "user": "LOCAL\\without.if" } }'
#
# Test for things that MUST NOT match!
#
# the SP before the second IP is missing:
execute 'begin outside:192.168.1.13/50179(192.168.1.13/50179)(LOCAL\some.user) end'
# note: the expected result looks a bit strange. This is because we
# cannot (yet?) detect that "(192.168.1.13/50179)" is not a valid user name.
assert_output_json_eq '{ "originalmsg": "begin outside:192.168.1.13\/50179(192.168.1.13\/50179)(LOCAL\\some.user) end", "unparsed-data": "(LOCAL\\some.user) end" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cee-syslog.sh 0000755 0001750 0001750 00000001557 13370250152 015530 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "JSON field"
add_rule 'version=2'
add_rule 'rule=:%field:cee-syslog%'
execute '@cee:{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee:{"f1": "1", "f2": 2} ' # note the trailing space
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee: {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee: {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#
# Things that MUST NOT work
#
execute '@cee: {"f1": "1", "f2": 2} data'
assert_output_json_eq '{ "originalmsg": "@cee: {\"f1\": \"1\", \"f2\": 2} data", "unparsed-data": "@cee: {\"f1\": \"1\", \"f2\": 2} data" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/names.sh 0000755 0001750 0001750 00000002237 13370250152 013412 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "using names with literal"
add_rule 'version=2'
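# %[ ... ]% takes a JSON array of parser definitions that are matched in
# sequence against the input; the assertions below show which of the named
# elements actually end up in the output.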
add_rule 'rule=:a %[{"name":"num", "type":"number"}, {"type":"literal", "text":":"}, {"name":"hex", "type":"hexnumber"}]% b'
execute 'a 4711:0x4712 b'
assert_output_json_eq '{ "hex": "0x4712", "num": "4711" }'
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %[{"name":"num", "type":"number"}, {"name":"literal", "type":"literal", "text":":"}, {"name":"hex", "type":"hexnumber"}]% b'
execute 'a 4711:0x4712 b'
assert_output_json_eq '{ "hex": "0x4712", "num": "4711" }'
# check that "-" is still discarded
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %[{"name":"num", "type":"number"}, {"name":"-", "type":"literal", "text":":"}, {"name":"hex", "type":"hexnumber"}]% b'
execute 'a 4711:0x4712 b'
assert_output_json_eq '{ "hex": "0x4712", "num": "4711" }'
# now let's check old style. Here we need "-".
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %-:number%:%hex:hexnumber% b'
execute 'a 4711:0x4712 b'
assert_output_json_eq '{ "hex": "0x4712" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_json_jsoncnf.sh 0000755 0001750 0001750 00000004105 13370250152 016137 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "JSON field"
add_rule 'version=2'
add_rule 'rule=:%{"name":"field", "type":"json"}%'
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# let's see if something more complicated still works, so ADD some
# more rules
add_rule 'rule=:begin %field:json%'
add_rule 'rule=:begin %field:json%end'
add_rule 'rule=:%field:json%end'
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#check if trailing whitespace is ignored
execute '{"f1": "1", "f2": 2} '
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2}end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# note: the parser takes all whitespace after the JSON
# to be part of it!
execute 'begin {"f1": "1", "f2": 2} end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2} end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '{"f1": "1", "f2": 2}end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#check cases where parsing failure must occur
execute '{"f1": "1", f2: 2}'
assert_output_json_eq '{ "originalmsg": "{\"f1\": \"1\", f2: 2}", "unparsed-data": "{\"f1\": \"1\", f2: 2}" }'
#some more complex cases
add_rule 'rule=:%field1:json%-%field2:json%'
execute '{"f1": "1"}-{"f2": 2}'
assert_output_json_eq '{ "field2": { "f2": 2 }, "field1": { "f1": "1" } }'
# re-check previous def still works
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# now check some strange cases
reset_rules
add_rule 'version=2'
add_rule 'rule=:%field:json%'
# this check is because of bug in json-c:
# https://github.com/json-c/json-c/issues/181
execute '15:00'
assert_output_json_eq '{ "originalmsg": "15:00", "unparsed-data": "15:00" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_whitespace.sh 0000755 0001750 0001750 00000002154 13370250152 015604 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "whitespace parser"
# the "word" parser unfortunatly treats everything except
# a SP as being in the word. So a HT inside a word is
# permitted, which does not work well with what we
# want to test here. to solve this problem, we use op-quoted-string.
# However, we must actually quote the samples with HT, because
# that parser also treats HT as being part of the word. But thanks
# to the quotes, we can force it to not do that.
# rgerhards, 2015-04-30
add_rule 'version=2'
add_rule 'rule=:%a:op-quoted-string%%-:whitespace%%b:op-quoted-string%'
execute 'word1 word2' # multiple spaces
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute 'word1 word2' # single space
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute '"word1" "word2"' # tab (US-ASCII HT)
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute '"word1" "word2"' # mix of tab and spaces
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_number_maxval.sh 0000755 0001750 0001750 00000001132 13370250152 016303 0000000 0000000 #!/bin/bash
# added 2017-10-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "number field with maxval"
add_rule 'version=2'
add_rule 'rule=:here is a number %{ "type":"number", "name":"num", "maxval":1000}% in dec form'
execute 'here is a number 234 in dec form'
assert_output_json_eq '{"num": "234"}'
#check cases where parsing failure must occur
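# note: unlike the plain number test, the whole of "1234in dec form" is
# reported as unparsed here, because 1234 exceeds maxval=1000 and so the
# number parser itself fails.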
execute 'here is a number 1234in dec form'
assert_output_json_eq '{ "originalmsg": "here is a number 1234in dec form", "unparsed-data": "1234in dec form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_regex_with_negation.sh 0000755 0001750 0001750 00000001174 13370250152 017502 0000000 0000000 #!/bin/bash
# added 2014-11-17 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
export ln_opts='-oallowRegex'
test_def $0 "regex field with negation"
add_rule 'rule=:%text:regex:[^,]+%,%more:rest%'
execute '123,abc'
assert_output_contains '"text": "123"'
assert_output_contains '"more": "abc"'
reset_rules
add_rule 'rule=:%text:regex:([^ ,|]+( |\||,)?)+%%more:rest%'
execute '123 abc|456 789,def|ghi,jkl| and some more text'
assert_output_contains '"text": "123 abc|456 789,def|ghi,jkl|"'
assert_output_contains '"more": " and some more text"'
cleanup_tmp_files
liblognorm-2.0.6/tests/parser_whitespace.sh 0000755 0001750 0001750 00000001221 13370250152 016007 0000000 0000000 #!/bin/bash
# added 2015-07-15 by Rainer Gerhards
# This checks if whitespace inside parser definitions is properly treated
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "whitespace in parser definition"
add_rule 'rule=:here is a number % num:hexnumber % in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": "0x1234"}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_whitespace_jsoncnf.sh 0000755 0001750 0001750 00000002253 13370250152 017324 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "whitespace parser"
# the "word" parser unfortunatly treats everything except
# a SP as being in the word. So a HT inside a word is
# permitted, which does not work well with what we
# want to test here. to solve this problem, we use op-quoted-string.
# However, we must actually quote the samples with HT, because
# that parser also treats HT as being part of the word. But thanks
# to the quotes, we can force it to not do that.
# rgerhards, 2015-04-30
add_rule 'version=2'
add_rule 'rule=:%{"name":"a", "type":"op-quoted-string"}%%{"name":"-", "type":"whitespace"}%%{"name":"b", "type":"op-quoted-string"}%'
execute 'word1 word2' # multiple spaces
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute 'word1 word2' # single space
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute '"word1" "word2"' # tab (US-ASCII HT)
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
execute '"word1" "word2"' # mix of tab and spaces
assert_output_json_eq '{ "b": "word2", "a": "word1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_rest_jsoncnf.sh 0000755 0001750 0001750 00000004416 13370250152 016150 0000000 0000000 #!/bin/bash
# some more tests for the "rest" motif, especially to ensure that
# "rest" will not interfere with more specific rules.
# added 2015-04-27
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "rest matches"
# rule variants with and without a trailing "rest" (tail) capture
add_rule 'version=2'
add_rule 'rule=:%{"name":"iface", "type":"char-to", "extradata":":"}%:%{"name":"ip", "type":"ipv4"}%/%{"name":"port", "type":"number"}% (%{"name":"label2", "type":"char-to", "extradata":")"}%)'
add_rule 'rule=:%{"name":"iface", "type":"char-to", "extradata":":"}%:%{"name":"ip", "type":"ipv4"}%/%{"name":"port", "type":"number"}% (%{"name":"label2", "type":"char-to", "extradata":")"}%)%{"name":"tail", "type":"rest"}%'
add_rule 'rule=:%{"name":"iface", "type":"char-to", "extradata":":"}%:%{"name":"ip", "type":"ipv4"}%/%{"name":"port", "type":"number"}%'
add_rule 'rule=:%{"name":"iface", "type":"char-to", "extradata":":"}%:%{"name":"ip", "type":"ipv4"}%/%{"name":"port", "type":"number"}%%{"name":"tail", "type":"rest"}%'
# real-world cisco samples
execute 'Outside:10.20.30.40/35 (40.30.20.10/35)'
assert_output_json_eq '{ "label2": "40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 (40.30.20.10/35) with rest'
assert_output_json_eq '{ "tail": " with rest", "label2": "40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 (40.30.20.10/35 brace missing'
assert_output_json_eq '{ "tail": " (40.30.20.10\/35 brace missing", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 40.30.20.10/35'
assert_output_json_eq '{ "tail": " 40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
#
# test expected mismatches
#
execute 'not at all!'
assert_output_json_eq '{ "originalmsg": "not at all!", "unparsed-data": "not at all!" }'
execute 'Outside 10.20.30.40/35 40.30.20.10/35'
assert_output_json_eq '{ "originalmsg": "Outside 10.20.30.40\/35 40.30.20.10\/35", "unparsed-data": "Outside 10.20.30.40\/35 40.30.20.10\/35" }'
execute 'Outside:10.20.30.40/aa 40.30.20.10/35'
assert_output_json_eq '{ "originalmsg": "Outside:10.20.30.40\/aa 40.30.20.10\/35", "unparsed-data": "aa 40.30.20.10\/35" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_rfc5424timestamp-fmt_timestamp-unix-ms.sh 0000755 0001750 0001750 00000002765 13370250152 022742 0000000 0000000 #!/bin/bash
# added 2017-10-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "RFC5424 timestamp in timestamp-unix format"
add_rule 'version=2'
add_rule 'rule=:here is a timestamp %{ "type":"date-rfc5424", "name":"num", "format":"timestamp-unix-ms"}% in RFC5424 format'
execute 'here is a timestamp 2000-03-11T14:15:16+01:00 in RFC5424 format'
assert_output_json_eq '{ "num": 952780516000}'
# with milliseconds (too-low precision)
execute 'here is a timestamp 2000-03-11T14:15:16.1+01:00 in RFC5424 format'
assert_output_json_eq '{ "num": 952780516100 }'
execute 'here is a timestamp 2000-03-11T14:15:16.12+01:00 in RFC5424 format'
assert_output_json_eq '{ "num": 952780516120 }'
# with milliseconds (exactly right precision)
execute 'here is a timestamp 2000-03-11T14:15:16.123+01:00 in RFC5424 format'
assert_output_json_eq '{ "num": 952780516123 }'
# with overdone precision
execute 'here is a timestamp 2000-03-11T14:15:16.1234+01:00 in RFC5424 format'
assert_output_json_eq '{ "num": 952780516123 }'
execute 'here is a timestamp 2000-03-11T14:15:16.123456789+01:00 in RFC5424 format'
assert_output_json_eq '{ "num": 952780516123 }'
#check cases where parsing failure must occur
execute 'here is a timestamp 2000-03-11T14:15:16+01:00in RFC5424 format'
assert_output_json_eq '{ "originalmsg": "here is a timestamp 2000-03-11T14:15:16+01:00in RFC5424 format", "unparsed-data": "2000-03-11T14:15:16+01:00in RFC5424 format" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_float_with_invalid_ruledef.sh 0000755 0001750 0001750 00000000527 13370250152 021026 0000000 0000000 #!/bin/bash
# added 2015-02-26 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "float with invalid field-declaration"
add_rule 'rule=:%no:flo% foo'
execute '10.0 foo'
assert_output_json_eq '{ "originalmsg": "10.0 foo", "unparsed-data": "10.0 foo" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_regex_while_regex_support_is_disabled.sh 0000755 0001750 0001750 00000000554 13370250152 023264 0000000 0000000 #!/bin/bash
# added 2014-11-14 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "field regex, while regex support is disabled"
add_rule 'rule=:%first:regex:[a-z]+%'
execute 'foo'
assert_output_contains '"originalmsg": "foo"'
assert_output_contains '"unparsed-data": "foo"'
cleanup_tmp_files
liblognorm-2.0.6/tests/literal.sh 0000755 0001750 0001750 00000001646 13370250152 013746 0000000 0000000 #!/bin/bash
# added 2016-12-21 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "using names with literal"
add_rule 'version=2'
add_rule 'rule=:%{"type":"literal", "text":"a", "name":"var"}%'
execute 'a'
assert_output_json_eq '{ "var": "a" }'
reset_rules
add_rule 'version=2'
add_rule 'rule=:Test %{"type":"literal", "text":"a", "name":"var"}%'
execute 'Test a'
assert_output_json_eq '{ "var": "a" }'
reset_rules
add_rule 'version=2'
add_rule 'rule=:Test %{"type":"literal", "text":"a", "name":"var"}% End'
execute 'Test a End'
assert_output_json_eq '{ "var": "a" }'
reset_rules
add_rule 'version=2'
add_rule 'rule=:a %[{"name":"num", "type":"number"}, {"name":"colon", "type":"literal", "text":":"}, {"name":"hex", "type":"hexnumber"}]% b'
execute 'a 4711:0x4712 b'
assert_output_json_eq '{ "hex": "0x4712", "colon": ":", "num": "4711" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_duration_jsoncnf.sh 0000755 0001750 0001750 00000001500 13370250152 017007 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "duration syntax"
add_rule 'version=2'
add_rule 'rule=:duration %{"name":"field", "type":"duration"}% bytes'
add_rule 'rule=:duration %{"name":"field", "type":"duration"}%'
execute 'duration 0:00:42 bytes'
assert_output_json_eq '{"field": "0:00:42"}'
execute 'duration 0:00:42'
assert_output_json_eq '{"field": "0:00:42"}'
execute 'duration 9:00:42 bytes'
assert_output_json_eq '{"field": "9:00:42"}'
execute 'duration 00:00:42 bytes'
assert_output_json_eq '{"field": "00:00:42"}'
execute 'duration 37:59:42 bytes'
assert_output_json_eq '{"field": "37:59:42"}'
execute 'duration 37:60:42 bytes'
assert_output_contains '"unparsed-data": "37:60:42 bytes"'
cleanup_tmp_files
liblognorm-2.0.6/tests/usrdef_two.sh 0000755 0001750 0001750 00000001200 13370250152 014455 0000000 0000000 #!/bin/bash
# added 2015-10-30 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "user-defined type with two alternatives"
add_rule 'version=2'
add_rule 'type=@hex-byte:%f1:hexnumber{"maxval": "255"}%'
add_rule 'type=@hex-byte:%f1:word%'
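# the two type= lines above give the user-defined type @hex-byte two
# alternatives; per the assertions below, "0xff" is taken by the hexnumber
# alternative (maxval 255), while "TEST" falls back to the word alternative.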
add_rule 'rule=:a word %w1:word% a byte % .:@hex-byte % another word %w2:word%'
execute 'a word w1 a byte 0xff another word w2'
assert_output_json_eq '{ "w2": "w2", "f1": "0xff", "w1": "w1" }'
execute 'a word w1 a byte TEST another word w2'
assert_output_json_eq '{ "w2": "w2", "f1": "TEST", "w1": "w1" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/include_RULEBASES.sh 0000755 0001750 0001750 00000001175 13370250152 015337 0000000 0000000 #!/bin/bash
# added 2015-08-28 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "include (via LIBLOGNORM_RULEBASES directory)"
reset_rules
add_rule 'version=2'
add_rule 'include=inc.rulebase'
reset_rules inc
add_rule 'version=2' inc
add_rule 'rule=:%field:mac48%' inc
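# move the included rulebase out of the working directory so it can only be
# resolved via the LIBLOGNORM_RULEBASES directory; the include must work both
# with and without a trailing slash on that path.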
mv inc.rulebase /tmp
export LIBLOGNORM_RULEBASES=/tmp
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
export LIBLOGNORM_RULEBASES=/tmp/
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
rm /tmp/inc.rulebase
cleanup_tmp_files
liblognorm-2.0.6/tests/options.sh.in 0000644 0001750 0001750 00000000331 13273030617 014401 0000000 0000000 use_valgrind=@VALGRIND@
echo "Using valgrind: $use_valgrind"
if [ $use_valgrind == "yes" ]; then
cmd="valgrind --error-exitcode=191 --malloc-fill=ff --free-fill=fe --leak-check=full --trace-children=yes $cmd"
fi
liblognorm-2.0.6/tests/field_hexnumber_v1.sh 0000755 0001750 0001750 00000001063 13370250152 016051 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "hexnumber field"
add_rule 'rule=:here is a number %num:hexnumber% in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": "0x1234"}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/runaway_rule_comment.sh 0000755 0001750 0001750 00000001231 13370250152 016537 0000000 0000000 #!/bin/bash
# added 2015-09-16 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
# Note that this test produces an error message, as it encounters the
# runaway rule. This is OK and actually must happen. The prime point
# of the test is that it correctly loads the second rule, which
# would otherwise be consumed by the runaway rule.
. $srcdir/exec.sh
test_def $0 "runaway rule with comment lines (v2)"
reset_rules
add_rule 'version=2'
add_rule 'rule=:test %f1:word unmatched percent'
add_rule ''
add_rule '#comment'
add_rule 'rule=:%field:word%'
execute 'data'
assert_output_json_eq '{"field": "data"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/repeat_mismatch_in_while.sh 0000755 0001750 0001750 00000005135 13370250152 017332 0000000 0000000 #!/bin/bash
# added 2015-08-26 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
# This is based on a practical support case, see
# https://github.com/rsyslog/liblognorm/issues/130
. $srcdir/exec.sh
test_def $0 "repeat with mismatch in parser part"
reset_rules
add_rule 'version=2'
add_rule 'prefix=%timestamp:date-rfc3164% %hostname:word%'
add_rule 'rule=cisco,fwblock: \x25ASA-6-106015\x3a Deny %proto:word% (no connection) from %source:cisco-interface-spec% to %dest:cisco-interface-spec% flags %flags:repeat{ "parser": {"type":"word", "name":"."}, "while":{"type":"literal", "text":" "} }% on interface %srciface:word%'
echo step 1
execute 'Aug 18 13:18:45 192.168.99.2 %ASA-6-106015: Deny TCP (no connection) from 173.252.88.66/443 to 76.79.249.222/52746 flags RST on interface outside'
assert_output_json_eq '{ "originalmsg": "Aug 18 13:18:45 192.168.99.2 %ASA-6-106015: Deny TCP (no connection) from 173.252.88.66\/443 to 76.79.249.222\/52746 flags RST on interface outside", "unparsed-data": "RST on interface outside" }'
# now check case where we permit a mismatch inside the parser part and still
# accept this as valid. This is needed for some use cases. See github
# issue mentioned above for more details.
# Note: there is something odd with the testbench driver: I cannot use two
# consecutive spaces
reset_rules
add_rule 'version=2'
add_rule 'prefix=%timestamp:date-rfc3164% %hostname:word%'
add_rule 'rule=cisco,fwblock: \x25ASA-6-106015\x3a Deny %proto:word% (no connection) from %source:cisco-interface-spec% to %dest:cisco-interface-spec% flags %flags:repeat{ "option.permitMismatchInParser":true, "parser": {"type":"word", "name":"."}, "while":{"type":"literal", "text":" "} }%\x20 on interface %srciface:word%'
echo step 2
execute 'Aug 18 13:18:45 192.168.99.2 %ASA-6-106015: Deny TCP (no connection) from 173.252.88.66/443 to 76.79.249.222/52746 flags RST on interface outside'
assert_output_json_eq '{ "srciface": "outside", "flags": [ "RST" ], "dest": { "ip": "76.79.249.222", "port": "52746" }, "source": { "ip": "173.252.88.66", "port": "443" }, "proto": "TCP", "hostname": "192.168.99.2", "timestamp": "Aug 18 13:18:45" }'
echo step 3
execute 'Aug 18 13:18:45 192.168.99.2 %ASA-6-106015: Deny TCP (no connection) from 173.252.88.66/443 to 76.79.249.222/52746 flags RST XST on interface outside'
assert_output_json_eq '{ "srciface": "outside", "flags": [ "RST", "XST" ], "dest": { "ip": "76.79.249.222", "port": "52746" }, "source": { "ip": "173.252.88.66", "port": "443" }, "proto": "TCP", "hostname": "192.168.99.2", "timestamp": "Aug 18 13:18:45" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_rest.sh 0000755 0001750 0001750 00000003574 13370250152 014434 0000000 0000000 #!/bin/bash
# some more tests for the "rest" motif, especially to ensure that
# "rest" will not interfere with more specific rules.
# added 2015-04-27
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "rest matches"
# rule variants with and without a trailing "rest" (tail) capture
add_rule 'version=2'
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number% (%label2:char-to:)%)'
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number% (%label2:char-to:)%)%tail:rest%'
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number%'
add_rule 'rule=:%iface:char-to:\x3a%\x3a%ip:ipv4%/%port:number%%tail:rest%'
# real-world cisco samples
execute 'Outside:10.20.30.40/35 (40.30.20.10/35)'
assert_output_json_eq '{ "label2": "40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 (40.30.20.10/35) with rest'
assert_output_json_eq '{ "tail": " with rest", "label2": "40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 (40.30.20.10/35 brace missing'
assert_output_json_eq '{ "tail": " (40.30.20.10\/35 brace missing", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
execute 'Outside:10.20.30.40/35 40.30.20.10/35'
assert_output_json_eq '{ "tail": " 40.30.20.10\/35", "port": "35", "ip": "10.20.30.40", "iface": "Outside" }'
#
# test expected mismatches
#
execute 'not at all!'
assert_output_json_eq '{ "originalmsg": "not at all!", "unparsed-data": "not at all!" }'
execute 'Outside 10.20.30.40/35 40.30.20.10/35'
assert_output_json_eq '{ "originalmsg": "Outside 10.20.30.40\/35 40.30.20.10\/35", "unparsed-data": "Outside 10.20.30.40\/35 40.30.20.10\/35" }'
execute 'Outside:10.20.30.40/aa 40.30.20.10/35'
assert_output_json_eq '{ "originalmsg": "Outside:10.20.30.40\/aa 40.30.20.10\/35", "unparsed-data": "aa 40.30.20.10\/35" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/alternative_segfault.sh 0000755 0001750 0001750 00000001277 13370250152 016522 0000000 0000000 #!/bin/bash
# added 2016-10-17 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "a case that caused a segfault in practice"
add_rule 'version=2'
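# the "alternative" parser tries its sub-parsers at the same position; as the
# samples below show, "-" matches the unnamed literal and yields no field, a
# number is captured as "identd", and anything else makes the whole rule fail.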
add_rule 'rule=:%host:ipv4% %{"type":"alternative","parser":[{"type":"literal","text":"-"},{"type":"number","name":"identd"}]}% %OK:word%'
execute '1.2.3.4 - TEST_OK'
assert_output_json_eq '{ "OK": "TEST_OK", "host": "1.2.3.4" }'
execute '1.2.3.4 100 TEST_OK'
assert_output_json_eq '{ "OK": "TEST_OK", "identd": "100", "host": "1.2.3.4" }'
execute '1.2.3.4 ERR TEST_OK'
assert_output_json_eq '{ "originalmsg": "1.2.3.4 ERR TEST_OK", "unparsed-data": "ERR TEST_OK" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_suffixed_with_invalid_ruledef.sh 0000755 0001750 0001750 00000004313 13370250152 021533 0000000 0000000 #!/bin/bash
# added 2015-02-26 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "field with one of many possible suffixes, but invalid ruledef"
add_rule 'rule=:reclaimed %eden_free:suffixe:,:b,kb,mb,gb:number% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:kb,mb% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:kb,mb% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:,:% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:,:kb,mb% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:,:kb,mb:% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:,:kb,mb:floa% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
reset_rules
add_rule 'rule=:reclaimed %eden_free:suffixed:,:kb,m:b:floa% eden'
execute 'reclaimed 559mb eden'
assert_output_json_eq '{ "originalmsg": "reclaimed 559mb eden", "unparsed-data": "559mb eden" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cef_v1.sh 0000755 0001750 0001750 00000023100 13370250152 014605 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "CEF parser"
add_rule 'rule=:%f:cef%'
# fabricated tests to test specific functionality
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product\|1\|\\|Version|Signature ID|some name|Severity| aa=field1 bb=this is a name\=value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product|1|\\", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a name=value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \= value cc=field 3'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "aa": "field1", "bb": "this is a = value", "cc": "field 3" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity|'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=value'
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "name": "value" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=val\nue' # embedded LF
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { "name": "val\nue" } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value' #invalid punctuation in extension
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| n,me=value" }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\alue' #invalid escape in extension
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\\alue", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| name=v\\alue" }'
execute 'CEF:0|V\endor|Product|Version|Signature ID|some name|Severity| name=value' #invalid escape in header
assert_output_json_eq '{ "originalmsg": "CEF:0|V\\endor|Product|Version|Signature ID|some name|Severity| name=value", "unparsed-data": "CEF:0|V\\endor|Product|Version|Signature ID|some name|Severity| name=value" }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ' # single trailing space - valid
assert_output_json_eq '{ "f": { "DeviceVendor": "Vendor", "DeviceProduct": "Product", "DeviceVersion": "Version", "SignatureID": "Signature ID", "Name": "some name", "Severity": "Severity", "Extensions": { } } }'
execute 'CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ' # multiple trailing spaces - invalid
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| ", "unparsed-data": "CEF:0|Vendor|Product|Version|Signature ID|some name|Severity| " }'
execute 'CEF:0|Vendor'
assert_output_json_eq '{ "originalmsg": "CEF:0|Vendor", "unparsed-data": "CEF:0|Vendor" }'
execute 'CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \= value cc=field 3'
assert_output_json_eq '{ "originalmsg": "CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \\= value cc=field 3", "unparsed-data": "CEF:1|Vendor|Product|Version|Signature ID|some name|Severity| aa=field1 bb=this is a \\= value cc=field 3" }'
execute ''
assert_output_json_eq '{ "originalmsg": "", "unparsed-data": "" }'
# finally, a use case from practice
execute 'CEF:0|ArcSight|ArcSight|10.0.0.15.0|rule:101|FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt|High| eventId=24934046519 type=2 mrt=8888882444085 sessionId=0 generatorID=34rSQWFOOOCAVlswcKFkbA\=\= categorySignificance=/Normal categoryBehavior=/Execute/Query categoryDeviceGroup=/Application categoryOutcome=/Success categoryObject=/Host/Application modelConfidence=0 severity=0 relevance=10 assetCriticality=0 priority=2 art=1427882454263 cat=/Detection/FOO/UNIX/Direct Root Connection Attempt deviceSeverity=Warning rt=1427881661000 shost=server.foo.bar src=10.0.0.1 sourceZoneID=MRL4p30sFOOO8panjcQnFbw\=\= sourceZoneURI=/All Zones/FOO Solutions/Server Subnet/UK/PAR-WDC-12-CELL5-PROD S2U 1 10.0.0.1-10.0.0.1 sourceGeoCountryCode=GB sourceGeoLocationInfo=London slong=-0.90843 slat=51.9039 dhost=server.foo.bar dst=10.0.0.1 destinationZoneID=McFOOO0sBABCUHR83pKJmQA\=\= destinationZoneURI=/All Zones/FOO Solutions/Prod/AMERICAS/FOO 10.0.0.1-10.0.0.1 duser=johndoe destinationGeoCountryCode=US destinationGeoLocationInfo=Jersey City dlong=-90.0435 dlat=30.732 fname=FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt filePath=/All Rules/Real-time Rules/ACBP-ACCESS CONTROL and AUTHORIZATION/FOO/Unix Server/FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt fileType=Rule ruleThreadId=NQVtdFOOABDrKsmLWpyq8g\=\= cs2= flexString2=DC0001-988 locality=1 cs2Label=Configuration Resource ahost=foo.bar agt=10.0.0.1 av=10.0.0.12 atz=Europe/Berlin aid=34rSQWFOOOBCAVlswcKFkbA\=\= at=superagent_ng dvchost=server.foo.bar dvc=10.0.0.1 deviceZoneID=Mbb8pFOOODol1dBKdURJA\=\= deviceZoneURI=/All Zones/FOO Solutions/Prod/GERMANY/FOO US2 Class6 A 508 10.0.0.1-10.0.0.1 dtz=Europe/Berlin deviceFacility=Rules Engine eventAnnotationStageUpdateTime=1427882444192 eventAnnotationModificationTime=1427882444192 eventAnnotationAuditTrail=1,1427453188050,root,Queued,,,,\n eventAnnotationVersion=1 eventAnnotationFlags=0 eventAnnotationEndTime=1427881661000 eventAnnotationManagerReceiptTime=1427882444085 _cefVer=0.1 ad.arcSightEventPath=3VcygrkkBABCAYFOOLlU13A\=\= baseEventIds=24934003731"'
assert_output_json_eq '{ "f": { "DeviceVendor": "ArcSight", "DeviceProduct": "ArcSight", "DeviceVersion": "10.0.0.15.0", "SignatureID": "rule:101", "Name": "FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "Severity": "High", "Extensions": { "eventId": "24934046519", "type": "2", "mrt": "8888882444085", "sessionId": "0", "generatorID": "34rSQWFOOOCAVlswcKFkbA==", "categorySignificance": "\/Normal", "categoryBehavior": "\/Execute\/Query", "categoryDeviceGroup": "\/Application", "categoryOutcome": "\/Success", "categoryObject": "\/Host\/Application", "modelConfidence": "0", "severity": "0", "relevance": "10", "assetCriticality": "0", "priority": "2", "art": "1427882454263", "cat": "\/Detection\/FOO\/UNIX\/Direct Root Connection Attempt", "deviceSeverity": "Warning", "rt": "1427881661000", "shost": "server.foo.bar", "src": "10.0.0.1", "sourceZoneID": "MRL4p30sFOOO8panjcQnFbw==", "sourceZoneURI": "\/All Zones\/FOO Solutions\/Server Subnet\/UK\/PAR-WDC-12-CELL5-PROD S2U 1 10.0.0.1-10.0.0.1", "sourceGeoCountryCode": "GB", "sourceGeoLocationInfo": "London", "slong": "-0.90843", "slat": "51.9039", "dhost": "server.foo.bar", "dst": "10.0.0.1", "destinationZoneID": "McFOOO0sBABCUHR83pKJmQA==", "destinationZoneURI": "\/All Zones\/FOO Solutions\/Prod\/AMERICAS\/FOO 10.0.0.1-10.0.0.1", "duser": "johndoe", "destinationGeoCountryCode": "US", "destinationGeoLocationInfo": "Jersey City", "dlong": "-90.0435", "dlat": "30.732", "fname": "FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "filePath": "\/All Rules\/Real-time Rules\/ACBP-ACCESS CONTROL and AUTHORIZATION\/FOO\/Unix Server\/FOO-UNIX-Bypassing Golden Host-Direct Root Connection Attempt", "fileType": "Rule", "ruleThreadId": "NQVtdFOOABDrKsmLWpyq8g==", "cs2": "", "flexString2": "DC0001-988", "locality": "1", "cs2Label": "Configuration Resource", "ahost": "foo.bar", "agt": "10.0.0.1", "av": "10.0.0.12", "atz": "Europe\/Berlin", "aid": "34rSQWFOOOBCAVlswcKFkbA==", "at": "superagent_ng", "dvchost": "server.foo.bar", "dvc": "10.0.0.1", "deviceZoneID": "Mbb8pFOOODol1dBKdURJA==", "deviceZoneURI": "\/All Zones\/FOO Solutions\/Prod\/GERMANY\/FOO US2 Class6 A 508 10.0.0.1-10.0.0.1", "dtz": "Europe\/Berlin", "deviceFacility": "Rules Engine", "eventAnnotationStageUpdateTime": "1427882444192", "eventAnnotationModificationTime": "1427882444192", "eventAnnotationAuditTrail": "1,1427453188050,root,Queued,,,,\n", "eventAnnotationVersion": "1", "eventAnnotationFlags": "0", "eventAnnotationEndTime": "1427881661000", "eventAnnotationManagerReceiptTime": "1427882444085", "_cefVer": "0.1", "ad.arcSightEventPath": "3VcygrkkBABCAYFOOLlU13A==", "baseEventIds": "24934003731\"" } } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_duration_v1.sh 0000755 0001750 0001750 00000001416 13370250152 015703 0000000 0000000 #!/bin/bash
# added 2015-03-12 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "duration syntax"
add_rule 'rule=:duration %field:duration% bytes'
add_rule 'rule=:duration %field:duration%'
execute 'duration 0:00:42 bytes'
assert_output_json_eq '{"field": "0:00:42"}'
execute 'duration 0:00:42'
assert_output_json_eq '{"field": "0:00:42"}'
execute 'duration 9:00:42 bytes'
assert_output_json_eq '{"field": "9:00:42"}'
execute 'duration 00:00:42 bytes'
assert_output_json_eq '{"field": "00:00:42"}'
execute 'duration 37:59:42 bytes'
assert_output_json_eq '{"field": "37:59:42"}'
execute 'duration 37:60:42 bytes'
assert_output_contains '"unparsed-data": "37:60:42 bytes"'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_checkpoint-lea-terminator.sh 0000755 0001750 0001750 00000000657 13370250152 020526 0000000 0000000 #!/bin/bash
# added 2018-10-31 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "Checkpoint LEA parser"
add_rule 'version=2'
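# the "terminator" option makes the checkpoint-lea parser stop at "]", so the
# closing bracket literal in the rule below can still match.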
add_rule 'rule=:[ %{"name":"f", "type":"checkpoint-lea", "terminator": "]"}%]'
execute '[ tcp_flags: RST-ACK; src: 192.168.0.1; ]'
assert_output_json_eq '{ "f": { "tcp_flags": "RST-ACK", "src": "192.168.0.1" } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/repeat_simple.sh 0000755 0001750 0001750 00000001252 13370250152 015134 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "simple repeat syntax"
add_rule 'version=2'
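# "repeat" applies the parser sequence (number ":" number) over and over as
# long as the "while" part (", ") matches in between; each iteration becomes
# one element of the "numbers" array, as the assertion below shows.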
add_rule 'rule=:a %{"name":"numbers", "type":"repeat",
"parser":[
{"name":"n1", "type":"number"},
{"type":"literal", "text":":"},
{"name":"n2", "type":"number"}
],
"while":[
{"type":"literal", "text":", "}
]
}% b %w:word%
'
execute 'a 1:2, 3:4, 5:6, 7:8 b test'
assert_output_json_eq '{ "w": "test", "numbers": [ { "n2": "2", "n1": "1" }, { "n2": "4", "n1": "3" }, { "n2": "6", "n1": "5" }, { "n2": "8", "n1": "7" } ] }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_float-fmt_number.sh 0000755 0001750 0001750 00000001653 13370250152 016714 0000000 0000000 #!/bin/bash
# added 2017-10-02 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "float field"
add_rule 'version=2'
add_rule 'rule=:here is a number %{ "type":"float", "name":"num", "format":"number"}% in floating pt form'
execute 'here is a number 15.9 in floating pt form'
assert_output_json_eq '{"num": 15.9}'
reset_rules
# note: floating point numbers are tricky to get right, even more so if negative.
add_rule 'version=2'
add_rule 'rule=:here is a negative number %{ "type":"float", "name":"num", "format":"number"}% for you'
execute 'here is a negative number -4.2 for you'
assert_output_json_eq '{"num": -4.2}'
reset_rules
add_rule 'version=2'
add_rule 'rule=:here is another real number %{ "type":"float", "name":"num", "format":"number"}%.'
execute 'here is another real number 2.71.'
assert_output_json_eq '{"num": 2.71}'
cleanup_tmp_files
liblognorm-2.0.6/tests/repeat_while_alternative.sh 0000755 0001750 0001750 00000001453 13370250152 017354 0000000 0000000 #!/bin/bash
# added 2015-07-22 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "repeat syntax with alternative terminators"
add_rule 'version=2'
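# same idea as repeat_simple.sh, but the "while" part is itself an
# "alternative" parser, so both ", " and "," are accepted as separators
# between iterations.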
add_rule 'rule=:a %{"name":"numbers", "type":"repeat",
"parser":[
{"name":"n1", "type":"number"},
{"type":"literal", "text":":"},
{"name":"n2", "type":"number"}
],
"while": {
"type":"alternative", "parser": [
{"type":"literal", "text":", "},
{"type":"literal", "text":","}
]
}
}% b %w:word%
'
execute 'a 1:2, 3:4,5:6, 7:8 b test'
assert_output_json_eq '{ "w": "test", "numbers": [ { "n2": "2", "n1": "1" }, { "n2": "4", "n1": "3" }, { "n2": "6", "n1": "5" }, { "n2": "8", "n1": "7" } ] }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cee-syslog_v1.sh 0000755 0001750 0001750 00000001532 13370250152 016127 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "JSON field"
add_rule 'rule=:%field:cee-syslog%'
execute '@cee:{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee:{"f1": "1", "f2": 2} ' # note the trailing space
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee: {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '@cee: {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#
# Things that MUST NOT work
#
execute '@cee: {"f1": "1", "f2": 2} data'
assert_output_json_eq '{ "originalmsg": "@cee: {\"f1\": \"1\", \"f2\": 2} data", "unparsed-data": "@cee: {\"f1\": \"1\", \"f2\": 2} data" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_interpret_with_invalid_ruledef.sh 0000755 0001750 0001750 00000004675 13370250152 021745 0000000 0000000 #!/bin/bash
# added 2014-12-11 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "value interpreting field, with invalid ruledef"
add_rule 'rule=:%session_count:interpret:int:wd% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:interpret:int:% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:interpret:int% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:interpret:in% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:interpret:in:word% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:interpret:in:wd% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:interpret::word% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:interpret::% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:inter::% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
reset_rules
add_rule 'rule=:%session_count:inter:int:word% sessions established'
execute '64 sessions established'
assert_output_json_eq '{ "originalmsg": "64 sessions established", "unparsed-data": "64 sessions established" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_tokenized.sh 0000755 0001750 0001750 00000002340 13370250152 015441 0000000 0000000 #!/bin/bash
# added 2014-11-17 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "tokenized field"
add_rule 'rule=:%arr:tokenized: , :word% %more:rest%'
execute '123 , abc , 456 , def ijk789'
assert_output_contains '"arr": [ "123", "abc", "456", "def" ]'
assert_output_contains '"more": "ijk789"'
reset_rules
add_rule 'rule=:%ips:tokenized:, :ipv4% %text:rest%'
execute '10.20.30.40, 50.60.70.80, 90.100.110.120 are blocked'
assert_output_contains '"text": "are blocked"'
assert_output_contains '"ips": [ "10.20.30.40", "50.60.70.80", "90.100.110.120" ]'
reset_rules
add_rule 'rule=:comma separated list of colon separated list of # separated numbers: %some_nos:tokenized:, :tokenized: \x3a :tokenized:#:number%'
execute 'comma separated list of colon separated list of # separated numbers: 10, 20 : 30#40#50 : 60#70#80, 90 : 100'
assert_output_contains '"some_nos": [ [ [ "10" ] ], [ [ "20" ], [ "30", "40", "50" ], [ "60", "70", "80" ] ], [ [ "90" ], [ "100" ] ] ]'
reset_rules
add_rule 'rule=:%arr:tokenized:\x3a:number% %more:rest%'
execute '123:456:789 ijk789'
assert_output_json_eq '{"arr": [ "123", "456", "789" ], "more": "ijk789"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_cisco-interface-spec-at-EOL.sh 0000755 0001750 0001750 00000001124 13370250152 020451 0000000 0000000 #!/bin/bash
# added 2015-04-13 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "cisco-interface-spec-at-EOL syntax"
add_rule 'version=2'
add_rule 'rule=:begin %field:cisco-interface-spec%%r:rest%'
execute 'begin outside:192.0.2.1/50349 end'
assert_output_json_eq '{ "r": " end", "field": { "interface": "outside", "ip": "192.0.2.1", "port": "50349" } }'
execute 'begin outside:192.0.2.1/50349'
assert_output_json_eq '{ "r": "", "field": { "interface": "outside", "ip": "192.0.2.1", "port": "50349" } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_ipv6_v1.sh 0000755 0001750 0001750 00000004326 13370250152 014745 0000000 0000000 #!/bin/bash
# added 2015-06-23 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "IPv6 parser"
add_rule 'rule=:%f:ipv6%'
# examples from RFC4291, sect. 2.2
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345:6789'
assert_output_json_eq '{ "f": "ABCD:EF01:2345:6789:ABCD:EF01:2345:6789" }'
execute 'ABCD:EF01:2345:6789:abcd:EF01:2345:6789' # mixed hex case
assert_output_json_eq '{ "f": "ABCD:EF01:2345:6789:abcd:EF01:2345:6789" }'
execute '2001:DB8:0:0:8:800:200C:417A'
assert_output_json_eq '{ "f": "2001:DB8:0:0:8:800:200C:417A" }'
execute '0:0:0:0:0:0:0:1'
assert_output_json_eq '{ "f": "0:0:0:0:0:0:0:1" }'
execute '2001:DB8::8:800:200C:417A'
assert_output_json_eq '{ "f": "2001:DB8::8:800:200C:417A" }'
execute 'FF01::101'
assert_output_json_eq '{ "f": "FF01::101" }'
execute '::1'
assert_output_json_eq '{ "f": "::1" }'
execute '::'
assert_output_json_eq '{ "f": "::" }'
execute '0:0:0:0:0:0:13.1.68.3'
assert_output_json_eq '{ "f": "0:0:0:0:0:0:13.1.68.3" }'
execute '::13.1.68.3'
assert_output_json_eq '{ "f": "::13.1.68.3" }'
execute '::FFFF:129.144.52.38'
assert_output_json_eq '{ "f": "::FFFF:129.144.52.38" }'
# invalid samples
execute '2001:DB8::8::800:200C:417A' # two :: sequences
assert_output_json_eq '{ "originalmsg": "2001:DB8::8::800:200C:417A", "unparsed-data": "2001:DB8::8::800:200C:417A" }'
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345::6789' # :: with too many blocks
assert_output_json_eq '{ "originalmsg": "ABCD:EF01:2345:6789:ABCD:EF01:2345::6789", "unparsed-data": "ABCD:EF01:2345:6789:ABCD:EF01:2345::6789" }'
execute 'ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798' # too many blocks (9)
assert_output_json_eq '{"originalmsg": "ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798", "unparsed-data": "ABCD:EF01:2345:6789:ABCD:EF01:2345:1:6798" }'
execute ':0:0:0:0:0:0:1' # missing first digit
assert_output_json_eq '{ "originalmsg": ":0:0:0:0:0:0:1", "unparsed-data": ":0:0:0:0:0:0:1" }'
execute '0:0:0:0:0:0:0:' # missing last digit
assert_output_json_eq '{ "originalmsg": "0:0:0:0:0:0:0:", "unparsed-data": "0:0:0:0:0:0:0:" }'
execute '13.1.68.3' # pure IPv4 address
assert_output_json_eq '{ "originalmsg": "13.1.68.3", "unparsed-data": "13.1.68.3" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/parser_LF_jsoncnf.sh 0000755 0001750 0001750 00000001303 13370250152 015675 0000000 0000000 #!/bin/bash
# added 2015-07-15 by Rainer Gerhards
# This checks if whitespace inside parser definitions is properly treated
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "LF in parser definition"
add_rule 'version=2'
add_rule 'rule=:here is a number %{
"name":"num", "type":"hexnumber"
}% in hex form'
execute 'here is a number 0x1234 in hex form'
assert_output_json_eq '{"num": "0x1234"}'
#check cases where parsing failure must occur
execute 'here is a number 0x1234in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0x1234in hex form", "unparsed-data": "0x1234in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_interpret.sh 0000755 0001750 0001750 00000004571 13370250152 015471 0000000 0000000 #!/bin/bash
# added 2014-12-11 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "value interpreting field"
add_rule 'rule=:%session_count:interpret:int:word% sessions established'
execute '64 sessions established'
assert_output_json_eq '{"session_count": 64}'
reset_rules
add_rule 'rule=:max sessions limit reached: %at_limit:interpret:bool:word%'
execute 'max sessions limit reached: true'
assert_output_json_eq '{"at_limit": true}'
execute 'max sessions limit reached: false'
assert_output_json_eq '{"at_limit": false}'
execute 'max sessions limit reached: TRUE'
assert_output_json_eq '{"at_limit": true}'
execute 'max sessions limit reached: FALSE'
assert_output_json_eq '{"at_limit": false}'
execute 'max sessions limit reached: yes'
assert_output_json_eq '{"at_limit": true}'
execute 'max sessions limit reached: no'
assert_output_json_eq '{"at_limit": false}'
execute 'max sessions limit reached: YES'
assert_output_json_eq '{"at_limit": true}'
execute 'max sessions limit reached: NO'
assert_output_json_eq '{"at_limit": false}'
reset_rules
add_rule 'rule=:record count for shard [%shard:interpret:base16int:char-to:]%] is %record_count:interpret:base10int:number% and %latency_percentile:interpret:float:char-to:\x25%\x25ile latency is %latency:interpret:float:word% %latency_unit:word%'
execute 'record count for shard [3F] is 50000 and 99.99%ile latency is 2.1 seconds'
assert_output_json_eq '{"shard": 63, "record_count": 50000, "latency_percentile": 99.99, "latency": 2.1, "latency_unit" : "seconds"}'
reset_rules
add_rule 'rule=:%latency_percentile:interpret:float:char-to:\x25%\x25ile latency is %latency:interpret:float:word%'
execute '98.1%ile latency is 1.999123'
assert_output_json_eq '{"latency_percentile": 98.1, "latency": 1.999123}'
reset_rules
add_rule 'rule=:%latency_percentile:interpret:float:number%'
add_rule 'rule=:%latency_percentile:interpret:int:number%'
add_rule 'rule=:%latency_percentile:interpret:base16int:number%'
add_rule 'rule=:%latency_percentile:interpret:base10int:number%'
add_rule 'rule=:%latency_percentile:interpret:boolean:number%'
execute 'foo'
assert_output_json_eq '{ "originalmsg": "foo", "unparsed-data": "foo" }'
reset_rules
add_rule 'rule=:gc pause: %pause_time:interpret:float:float%ms'
execute 'gc pause: 12.3ms'
assert_output_json_eq '{"pause_time": 12.3}'
cleanup_tmp_files
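# Usage sketch (not part of the upstream test): the interpret wrapper converts
# the matched text into a native JSON type (number, boolean, float) instead of
# the plain string that ordinary fields emit. Same caveats as before --
# interpret.rb is an arbitrary name and the lognormalizer flags are assumptions.
cat > interpret.rb <<'EOF'
rule=:%session_count:interpret:int:word% sessions established
rule=:max sessions limit reached: %at_limit:interpret:bool:word%
EOF
echo '64 sessions established' | lognormalizer -r interpret.rb -e json
# expected: {"session_count": 64}  -- a JSON number, not the string "64"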
liblognorm-2.0.6/tests/field_recursive.sh 0000755 0001750 0001750 00000005675 13370250152 015472 0000000 0000000 #!/bin/bash
# added 2014-11-26 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "recursive parsing field"
#tail recursion with default tail field
add_rule 'rule=:%word:word% %next:recursive%'
add_rule 'rule=:%word:word%'
execute '123 abc 456 def'
assert_output_json_eq '{"word": "123", "next": {"word": "abc", "next": {"word": "456", "next" : {"word": "def"}}}}'
#tail recursion with explicitly named 'tail' field
reset_rules
add_rule 'rule=:%word:word% %next:recursive:tail%'
add_rule 'rule=:%word:word%'
execute '123 abc 456 def'
assert_output_json_eq '{"word": "123", "next": {"word": "abc", "next": {"word": "456", "next" : {"word": "def"}}}}'
#tail recursion with tail field having arbitrary name
reset_rules
add_rule 'rule=:%word:word% %next:recursive:foo%'
add_rule 'rule=:%word:word%'
execute '123 abc 456 def'
assert_output_json_eq '{"word": "123", "next": {"word": "abc", "next": {"word": "456", "next" : {"word": "def"}}}}'
#non-tail recursion with default tail field
reset_rules
add_rule 'rule=:blocked on %device:word% %net:recursive%at %tm:date-rfc5424%'
add_rule 'rule=:%ip_addr:ipv4% %tail:rest%'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %tail:rest%'
execute 'blocked on gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
execute 'blocked on gw-1 10.20.30.40/16 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
#non-tail recursion with tail field being explicitly named 'tail'
reset_rules
add_rule 'rule=:blocked on %device:word% %net:recursive:tail%at %tm:date-rfc5424%'
add_rule 'rule=:%ip_addr:ipv4% %tail:rest%'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %tail:rest%'
execute 'blocked on gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
execute 'blocked on gw-1 10.20.30.40/16 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
#non-tail recursion with tail field having arbitrary name
reset_rules
add_rule 'rule=:blocked on %device:word% %net:recursive:remaining%at %tm:date-rfc5424%'
add_rule 'rule=:%ip_addr:ipv4% %remaining:rest%'
add_rule 'rule=:%subnet_addr:ipv4%/%mask:number% %remaining:rest%'
execute 'blocked on gw-1 10.20.30.40 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"ip_addr": "10.20.30.40"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
execute 'blocked on gw-1 10.20.30.40/16 at 2014-12-08T08:53:33.05+05:30'
assert_output_json_eq '{"device": "gw-1", "net": {"subnet_addr": "10.20.30.40", "mask": "16"}, "tm": "2014-12-08T08:53:33.05+05:30"}'
cleanup_tmp_files
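# Usage sketch (not part of the upstream test): the recursive type re-applies
# the rulebase to the remainder of the line and stores the sub-match under the
# field's name; as the variants above suggest, the optional third argument
# names the sub-match field that carries the still-unparsed text (default
# "tail"). Same CLI assumptions as the earlier sketches.
cat > recursive.rb <<'EOF'
rule=:%word:word% %next:recursive%
rule=:%word:word%
EOF
echo '123 abc 456' | lognormalizer -r recursive.rb -e json
# expected: {"word": "123", "next": {"word": "abc", "next": {"word": "456"}}}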
liblognorm-2.0.6/tests/field_hexnumber_range_jsoncnf.sh 0000755 0001750 0001750 00000001420 13370250152 020334 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "hexnumber field with range checks"
add_rule 'version=2'
add_rule 'rule=:here is a number %{"name":"num", "type":"hexnumber", "maxval":191}% in hex form'
execute 'here is a number 0x12 in hex form'
assert_output_json_eq '{"num": "0x12"}'
execute 'here is a number 0x0 in hex form'
assert_output_json_eq '{"num": "0x0"}'
execute 'here is a number 0xBf in hex form'
assert_output_json_eq '{"num": "0xBf"}'
#check cases where parsing failure must occur
execute 'here is a number 0xc0 in hex form'
assert_output_json_eq '{ "originalmsg": "here is a number 0xc0 in hex form", "unparsed-data": "0xc0 in hex form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_mac48_jsoncnf.sh 0000755 0001750 0001750 00000001322 13370250152 016100 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "mac48 syntax"
add_rule 'version=2'
add_rule 'rule=:%{"name":"field", "type":"mac48"}%'
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
execute 'f0-f6-1c-5f-cc-a2'
assert_output_json_eq '{"field": "f0-f6-1c-5f-cc-a2"}'
# things that need to NOT match
execute 'f0-f6:1c:5f:cc-a2'
assert_output_json_eq '{ "originalmsg": "f0-f6:1c:5f:cc-a2", "unparsed-data": "f0-f6:1c:5f:cc-a2" }'
execute 'f0:f6:1c:xf:cc:a2'
assert_output_json_eq '{ "originalmsg": "f0:f6:1c:xf:cc:a2", "unparsed-data": "f0:f6:1c:xf:cc:a2" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_regex_default_group_parse_and_return.sh 0000755 0001750 0001750 00000000655 13370250152 023121 0000000 0000000 #!/bin/bash
# added 2014-11-14 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
export ln_opts='-oallowRegex'
test_def $0 "type ERE for regex field"
add_rule 'rule=:%first:regex:[a-z]+% %second:regex:\d+\x25\x3a[a-f0-9]+\x25%'
execute 'foo 122%:7a%'
assert_output_contains '"first": "foo"'
assert_output_contains '"second": "122%:7a%"'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_json.sh 0000755 0001750 0001750 00000004060 13370250152 014417 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "JSON field"
add_rule 'version=2'
add_rule 'rule=:%field:json%'
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# let's see if something more complicated still works, so ADD some
# more rules
add_rule 'rule=:begin %field:json%'
add_rule 'rule=:begin %field:json%end'
add_rule 'rule=:%field:json%end'
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#check if trailing whitespace is ignored
execute '{"f1": "1", "f2": 2} '
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2}end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# note: the parser takes all whitespace after the JSON
# to be part of it!
execute 'begin {"f1": "1", "f2": 2} end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2} end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '{"f1": "1", "f2": 2}end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#check cases where parsing failure must occur
execute '{"f1": "1", f2: 2}'
assert_output_json_eq '{ "originalmsg": "{\"f1\": \"1\", f2: 2}", "unparsed-data": "{\"f1\": \"1\", f2: 2}" }'
#some more complex cases
add_rule 'rule=:%field1:json%-%field2:json%'
execute '{"f1": "1"}-{"f2": 2}'
assert_output_json_eq '{ "field2": { "f2": 2 }, "field1": { "f1": "1" } }'
# re-check previous def still works
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# now check some strange cases
reset_rules
add_rule 'version=2'
add_rule 'rule=:%field:json%'
# this check is because of a bug in json-c:
# https://github.com/json-c/json-c/issues/181
execute '15:00'
assert_output_json_eq '{ "originalmsg": "15:00", "unparsed-data": "15:00" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/missing_line_ending_v1.sh 0000755 0001750 0001750 00000001304 13370250152 016713 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "missing line ending (v1)"
reset_rules
add_rule_no_LF 'rule=:%field:mac48%'
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
execute 'f0-f6-1c-5f-cc-a2'
assert_output_json_eq '{"field": "f0-f6-1c-5f-cc-a2"}'
# things that need to NOT match
execute 'f0-f6:1c:5f:cc-a2'
assert_output_json_eq '{ "originalmsg": "f0-f6:1c:5f:cc-a2", "unparsed-data": "f0-f6:1c:5f:cc-a2" }'
execute 'f0:f6:1c:xf:cc:a2'
assert_output_json_eq '{ "originalmsg": "f0:f6:1c:xf:cc:a2", "unparsed-data": "f0:f6:1c:xf:cc:a2" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_json_v1.sh 0000755 0001750 0001750 00000004006 13370250152 015025 0000000 0000000 #!/bin/bash
# added 2015-03-01 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "JSON field"
add_rule 'rule=:%field:json%'
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# let's see if something more complicated still works, so ADD some
# more rules
add_rule 'rule=:begin %field:json%'
add_rule 'rule=:begin %field:json%end'
add_rule 'rule=:%field:json%end'
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#check if trailing whitespace is ignored
execute '{"f1": "1", "f2": 2} '
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2}end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# note: the parser takes all whitespace after the JSON
# to be part of it!
execute 'begin {"f1": "1", "f2": 2} end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute 'begin {"f1": "1", "f2": 2} end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
execute '{"f1": "1", "f2": 2}end'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
#check cases where parsing failure must occur
execute '{"f1": "1", f2: 2}'
assert_output_json_eq '{ "originalmsg": "{\"f1\": \"1\", f2: 2}", "unparsed-data": "{\"f1\": \"1\", f2: 2}" }'
#some more complex cases
add_rule 'rule=:%field1:json%-%field2:json%'
execute '{"f1": "1"}-{"f2": 2}'
assert_output_json_eq '{ "field2": { "f2": 2 }, "field1": { "f1": "1" } }'
# re-check previous def still works
execute '{"f1": "1", "f2": 2}'
assert_output_json_eq '{ "field": { "f1": "1", "f2": 2 } }'
# now check some strange cases
reset_rules
add_rule 'rule=:%field:json%'
# this check is because of a bug in json-c:
# https://github.com/json-c/json-c/issues/181
execute '15:00'
assert_output_json_eq '{ "originalmsg": "15:00", "unparsed-data": "15:00" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/field_checkpoint-lea_jsoncnf.sh 0000755 0001750 0001750 00000000625 13370250152 020057 0000000 0000000 #!/bin/bash
# added 2015-06-18 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "Checkpoint LEA parser"
add_rule 'version=2'
add_rule 'rule=:%{"name":"f", "type":"checkpoint-lea"}%'
execute 'tcp_flags: RST-ACK; src: 192.168.0.1;'
assert_output_json_eq '{ "f": { "tcp_flags": "RST-ACK", "src": "192.168.0.1" } }'
cleanup_tmp_files
liblognorm-2.0.6/tests/Makefile.in 0000644 0001750 0001750 00000231764 13370251155 014032 0000000 0000000 # Makefile.in generated by automake 1.15.1 from Makefile.am.
# @configure_input@
# Copyright (C) 1994-2017 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
VPATH = @srcdir@
am__is_gnu_make = { \
if test -z '$(MAKELEVEL)'; then \
false; \
elif test -n '$(MAKE_HOST)'; then \
true; \
elif test -n '$(MAKE_VERSION)' && test -n '$(CURDIR)'; then \
true; \
else \
false; \
fi; \
}
am__make_running_with_option = \
case $${target_option-} in \
?) ;; \
*) echo "am__make_running_with_option: internal error: invalid" \
"target option '$${target_option-}' specified" >&2; \
exit 1;; \
esac; \
has_opt=no; \
sane_makeflags=$$MAKEFLAGS; \
if $(am__is_gnu_make); then \
sane_makeflags=$$MFLAGS; \
else \
case $$MAKEFLAGS in \
*\\[\ \ ]*) \
bs=\\; \
sane_makeflags=`printf '%s\n' "$$MAKEFLAGS" \
| sed "s/$$bs$$bs[$$bs $$bs ]*//g"`;; \
esac; \
fi; \
skip_next=no; \
strip_trailopt () \
{ \
flg=`printf '%s\n' "$$flg" | sed "s/$$1.*$$//"`; \
}; \
for flg in $$sane_makeflags; do \
test $$skip_next = yes && { skip_next=no; continue; }; \
case $$flg in \
*=*|--*) continue;; \
-*I) strip_trailopt 'I'; skip_next=yes;; \
-*I?*) strip_trailopt 'I';; \
-*O) strip_trailopt 'O'; skip_next=yes;; \
-*O?*) strip_trailopt 'O';; \
-*l) strip_trailopt 'l'; skip_next=yes;; \
-*l?*) strip_trailopt 'l';; \
-[dEDm]) skip_next=yes;; \
-[JT]) skip_next=yes;; \
esac; \
case $$flg in \
*$$target_option*) has_opt=yes; break;; \
esac; \
done; \
test $$has_opt = yes
am__make_dryrun = (target_option=n; $(am__make_running_with_option))
am__make_keepgoing = (target_option=k; $(am__make_running_with_option))
pkgdatadir = $(datadir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkglibexecdir = $(libexecdir)/@PACKAGE@
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
build_triplet = @build@
host_triplet = @host@
check_PROGRAMS = json_eq$(EXEEXT)
@ENABLE_REGEXP_TRUE@am__append_1 = $(REGEXP_TESTS)
subdir = tests
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/m4/libtool.m4 \
$(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \
$(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
DIST_COMMON = $(srcdir)/Makefile.am $(am__DIST_COMMON)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES = options.sh
CONFIG_CLEAN_VPATH_FILES =
am__objects_1 = json_eq-json_eq.$(OBJEXT)
am_json_eq_OBJECTS = $(am__objects_1)
json_eq_OBJECTS = $(am_json_eq_OBJECTS)
am__DEPENDENCIES_1 =
json_eq_DEPENDENCIES = $(am__DEPENDENCIES_1)
AM_V_lt = $(am__v_lt_@AM_V@)
am__v_lt_ = $(am__v_lt_@AM_DEFAULT_V@)
am__v_lt_0 = --silent
am__v_lt_1 =
json_eq_LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(json_eq_LDFLAGS) $(LDFLAGS) -o $@
AM_V_P = $(am__v_P_@AM_V@)
am__v_P_ = $(am__v_P_@AM_DEFAULT_V@)
am__v_P_0 = false
am__v_P_1 = :
AM_V_GEN = $(am__v_GEN_@AM_V@)
am__v_GEN_ = $(am__v_GEN_@AM_DEFAULT_V@)
am__v_GEN_0 = @echo " GEN " $@;
am__v_GEN_1 =
AM_V_at = $(am__v_at_@AM_V@)
am__v_at_ = $(am__v_at_@AM_DEFAULT_V@)
am__v_at_0 = @
am__v_at_1 =
DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir)
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
am__mv = mv -f
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
LTCOMPILE = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=compile $(CC) $(DEFS) \
$(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) $(CPPFLAGS) \
$(AM_CFLAGS) $(CFLAGS)
AM_V_CC = $(am__v_CC_@AM_V@)
am__v_CC_ = $(am__v_CC_@AM_DEFAULT_V@)
am__v_CC_0 = @echo " CC " $@;
am__v_CC_1 =
CCLD = $(CC)
LINK = $(LIBTOOL) $(AM_V_lt) --tag=CC $(AM_LIBTOOLFLAGS) \
$(LIBTOOLFLAGS) --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
$(AM_LDFLAGS) $(LDFLAGS) -o $@
AM_V_CCLD = $(am__v_CCLD_@AM_V@)
am__v_CCLD_ = $(am__v_CCLD_@AM_DEFAULT_V@)
am__v_CCLD_0 = @echo " CCLD " $@;
am__v_CCLD_1 =
SOURCES = $(json_eq_SOURCES)
DIST_SOURCES = $(json_eq_SOURCES)
am__can_run_installinfo = \
case $$AM_UPDATE_INFO_DIR in \
n|no|NO) false;; \
*) (install-info --version) >/dev/null 2>&1;; \
esac
am__tagged_files = $(HEADERS) $(SOURCES) $(TAGS_FILES) $(LISP)
# Read a list of newline-separated strings from the standard input,
# and print each of them once, without duplicates. Input order is
# *not* preserved.
am__uniquify_input = $(AWK) '\
BEGIN { nonempty = 0; } \
{ items[$$0] = 1; nonempty = 1; } \
END { if (nonempty) { for (i in items) print i; }; } \
'
# Make sure the list of sources is unique. This is necessary because,
# e.g., the same source file might be shared among _SOURCES variables
# for different programs/libraries.
am__define_uniq_tagged_files = \
list='$(am__tagged_files)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | $(am__uniquify_input)`
ETAGS = etags
CTAGS = ctags
am__tty_colors_dummy = \
mgn= red= grn= lgn= blu= brg= std=; \
am__color_tests=no
am__tty_colors = { \
$(am__tty_colors_dummy); \
if test "X$(AM_COLOR_TESTS)" = Xno; then \
am__color_tests=no; \
elif test "X$(AM_COLOR_TESTS)" = Xalways; then \
am__color_tests=yes; \
elif test "X$$TERM" != Xdumb && { test -t 1; } 2>/dev/null; then \
am__color_tests=yes; \
fi; \
if test $$am__color_tests = yes; then \
red='[0;31m'; \
grn='[0;32m'; \
lgn='[1;32m'; \
blu='[1;34m'; \
mgn='[0;35m'; \
brg='[1m'; \
std='[m'; \
fi; \
}
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`;
am__install_max = 40
am__nobase_strip_setup = \
srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`
am__nobase_strip = \
for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||"
am__nobase_list = $(am__nobase_strip_setup); \
for p in $$list; do echo "$$p $$p"; done | \
sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \
$(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \
if (++n[$$2] == $(am__install_max)) \
{ print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \
END { for (dir in files) print dir, files[dir] }'
am__base_list = \
sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \
sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g'
am__uninstall_files_from_dir = { \
test -z "$$files" \
|| { test ! -d "$$dir" && test ! -f "$$dir" && test ! -r "$$dir"; } \
|| { echo " ( cd '$$dir' && rm -f" $$files ")"; \
$(am__cd) "$$dir" && rm -f $$files; }; \
}
am__recheck_rx = ^[ ]*:recheck:[ ]*
am__global_test_result_rx = ^[ ]*:global-test-result:[ ]*
am__copy_in_global_log_rx = ^[ ]*:copy-in-global-log:[ ]*
# A command that, given a newline-separated list of test names on the
# standard input, prints the names of the tests that are to be re-run
# upon "make recheck".
am__list_recheck_tests = $(AWK) '{ \
recheck = 1; \
while ((rc = (getline line < ($$0 ".trs"))) != 0) \
{ \
if (rc < 0) \
{ \
if ((getline line2 < ($$0 ".log")) < 0) \
recheck = 0; \
break; \
} \
else if (line ~ /$(am__recheck_rx)[nN][Oo]/) \
{ \
recheck = 0; \
break; \
} \
else if (line ~ /$(am__recheck_rx)[yY][eE][sS]/) \
{ \
break; \
} \
}; \
if (recheck) \
print $$0; \
close ($$0 ".trs"); \
close ($$0 ".log"); \
}'
# A command that, given a newline-separated list of test names on the
# standard input, creates the global log from their .trs and .log files.
am__create_global_log = $(AWK) ' \
function fatal(msg) \
{ \
print "fatal: making $@: " msg | "cat >&2"; \
exit 1; \
} \
function rst_section(header) \
{ \
print header; \
len = length(header); \
for (i = 1; i <= len; i = i + 1) \
printf "="; \
printf "\n\n"; \
} \
{ \
copy_in_global_log = 1; \
global_test_result = "RUN"; \
while ((rc = (getline line < ($$0 ".trs"))) != 0) \
{ \
if (rc < 0) \
fatal("failed to read from " $$0 ".trs"); \
if (line ~ /$(am__global_test_result_rx)/) \
{ \
sub("$(am__global_test_result_rx)", "", line); \
sub("[ ]*$$", "", line); \
global_test_result = line; \
} \
else if (line ~ /$(am__copy_in_global_log_rx)[nN][oO]/) \
copy_in_global_log = 0; \
}; \
if (copy_in_global_log) \
{ \
rst_section(global_test_result ": " $$0); \
while ((rc = (getline line < ($$0 ".log"))) != 0) \
{ \
if (rc < 0) \
fatal("failed to read from " $$0 ".log"); \
print line; \
}; \
printf "\n"; \
}; \
close ($$0 ".trs"); \
close ($$0 ".log"); \
}'
# Restructured Text title.
am__rst_title = { sed 's/.*/ & /;h;s/./=/g;p;x;s/ *$$//;p;g' && echo; }
# Solaris 10 'make', and several other traditional 'make' implementations,
# pass "-e" to $(SHELL), and POSIX 2008 even requires this. Work around it
# by disabling -e (using the XSI extension "set +e") if it's set.
am__sh_e_setup = case $$- in *e*) set +e;; esac
# Default flags passed to test drivers.
am__common_driver_flags = \
--color-tests "$$am__color_tests" \
--enable-hard-errors "$$am__enable_hard_errors" \
--expect-failure "$$am__expect_failure"
# To be inserted before the command running the test. Creates the
# directory for the log if needed. Stores in $dir the directory
# containing $f, in $tst the test, in $log the log. Executes the
# developer-defined test setup AM_TESTS_ENVIRONMENT (if any), and
# passes TESTS_ENVIRONMENT. Set up options for the wrapper that
# will run the test scripts (or their associated LOG_COMPILER, if
# they have one).
am__check_pre = \
$(am__sh_e_setup); \
$(am__vpath_adj_setup) $(am__vpath_adj) \
$(am__tty_colors); \
srcdir=$(srcdir); export srcdir; \
case "$@" in \
*/*) am__odir=`echo "./$@" | sed 's|/[^/]*$$||'`;; \
*) am__odir=.;; \
esac; \
test "x$$am__odir" = x"." || test -d "$$am__odir" \
|| $(MKDIR_P) "$$am__odir" || exit $$?; \
if test -f "./$$f"; then dir=./; \
elif test -f "$$f"; then dir=; \
else dir="$(srcdir)/"; fi; \
tst=$$dir$$f; log='$@'; \
if test -n '$(DISABLE_HARD_ERRORS)'; then \
am__enable_hard_errors=no; \
else \
am__enable_hard_errors=yes; \
fi; \
case " $(XFAIL_TESTS) " in \
*[\ \ ]$$f[\ \ ]* | *[\ \ ]$$dir$$f[\ \ ]*) \
am__expect_failure=yes;; \
*) \
am__expect_failure=no;; \
esac; \
$(AM_TESTS_ENVIRONMENT) $(TESTS_ENVIRONMENT)
# A shell command to get the names of the tests scripts with any registered
# extension removed (i.e., equivalently, the names of the test logs, with
# the '.log' extension removed). The result is saved in the shell variable
# '$bases'. This honors runtime overriding of TESTS and TEST_LOGS. Sadly,
# we cannot use something simpler, involving e.g., "$(TEST_LOGS:.log=)",
# since that might cause problems with VPATH rewrites for suffix-less tests.
# See also 'test-harness-vpath-rewrite.sh' and 'test-trs-basic.sh'.
am__set_TESTS_bases = \
bases='$(TEST_LOGS)'; \
bases=`for i in $$bases; do echo $$i; done | sed 's/\.log$$//'`; \
bases=`echo $$bases`
RECHECK_LOGS = $(TEST_LOGS)
AM_RECURSIVE_TARGETS = check recheck
TEST_SUITE_LOG = test-suite.log
TEST_EXTENSIONS = @EXEEXT@ .test
LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver
LOG_COMPILE = $(LOG_COMPILER) $(AM_LOG_FLAGS) $(LOG_FLAGS)
am__set_b = \
case '$@' in \
*/*) \
case '$*' in \
*/*) b='$*';; \
*) b=`echo '$@' | sed 's/\.log$$//'`; \
esac;; \
*) \
b='$*';; \
esac
am__test_logs1 = $(TESTS:=.log)
am__test_logs2 = $(am__test_logs1:@EXEEXT@.log=.log)
TEST_LOGS = $(am__test_logs2:.test.log=.log)
TEST_LOG_DRIVER = $(SHELL) $(top_srcdir)/test-driver
TEST_LOG_COMPILE = $(TEST_LOG_COMPILER) $(AM_TEST_LOG_FLAGS) \
$(TEST_LOG_FLAGS)
am__DIST_COMMON = $(srcdir)/Makefile.in $(srcdir)/options.sh.in \
$(top_srcdir)/depcomp $(top_srcdir)/test-driver
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMTAR = @AMTAR@
AM_DEFAULT_VERBOSITY = @AM_DEFAULT_VERBOSITY@
AR = @AR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
DLLTOOL = @DLLTOOL@
DSYMUTIL = @DSYMUTIL@
DUMPBIN = @DUMPBIN@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
FEATURE_REGEXP = @FEATURE_REGEXP@
FGREP = @FGREP@
GREP = @GREP@
INSTALL = @INSTALL@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
JSON_C_CFLAGS = @JSON_C_CFLAGS@
JSON_C_LIBS = @JSON_C_LIBS@
LD = @LD@
LDFLAGS = @LDFLAGS@
LIBESTR_CFLAGS = @LIBESTR_CFLAGS@
LIBESTR_LIBS = @LIBESTR_LIBS@
LIBLOGNORM_CFLAGS = @LIBLOGNORM_CFLAGS@
LIBLOGNORM_LIBS = @LIBLOGNORM_LIBS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LIBTOOL = @LIBTOOL@
LIPO = @LIPO@
LN_S = @LN_S@
LTLIBOBJS = @LTLIBOBJS@
LT_SYS_LIBRARY_PATH = @LT_SYS_LIBRARY_PATH@
MAKEINFO = @MAKEINFO@
MANIFEST_TOOL = @MANIFEST_TOOL@
MKDIR_P = @MKDIR_P@
NM = @NM@
NMEDIT = @NMEDIT@
OBJDUMP = @OBJDUMP@
OBJEXT = @OBJEXT@
OTOOL = @OTOOL@
OTOOL64 = @OTOOL64@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_URL = @PACKAGE_URL@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
PCRE_CFLAGS = @PCRE_CFLAGS@
PCRE_LIBS = @PCRE_LIBS@
PKG_CONFIG = @PKG_CONFIG@
PKG_CONFIG_LIBDIR = @PKG_CONFIG_LIBDIR@
PKG_CONFIG_PATH = @PKG_CONFIG_PATH@
RANLIB = @RANLIB@
SED = @SED@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
SPHINXBUILD = @SPHINXBUILD@
STRIP = @STRIP@
VALGRIND = @VALGRIND@
VERSION = @VERSION@
WARN_CFLAGS = @WARN_CFLAGS@
WARN_LDFLAGS = @WARN_LDFLAGS@
WARN_SCANNERFLAGS = @WARN_SCANNERFLAGS@
abs_builddir = @abs_builddir@
abs_srcdir = @abs_srcdir@
abs_top_builddir = @abs_top_builddir@
abs_top_srcdir = @abs_top_srcdir@
ac_ct_AR = @ac_ct_AR@
ac_ct_CC = @ac_ct_CC@
ac_ct_DUMPBIN = @ac_ct_DUMPBIN@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build = @build@
build_alias = @build_alias@
build_cpu = @build_cpu@
build_os = @build_os@
build_vendor = @build_vendor@
builddir = @builddir@
datadir = @datadir@
datarootdir = @datarootdir@
docdir = @docdir@
dvidir = @dvidir@
exec_prefix = @exec_prefix@
host = @host@
host_alias = @host_alias@
host_cpu = @host_cpu@
host_os = @host_os@
host_vendor = @host_vendor@
htmldir = @htmldir@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localedir = @localedir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
pdfdir = @pdfdir@
pkg_config_libs_private = @pkg_config_libs_private@
prefix = @prefix@
program_transform_name = @program_transform_name@
psdir = @psdir@
runstatedir = @runstatedir@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
srcdir = @srcdir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
top_build_prefix = @top_build_prefix@
top_builddir = @top_builddir@
top_srcdir = @top_srcdir@
# re-enable if we really need the c program check
#check_PROGRAMS = json_eq user_test
json_eq_self_sources = json_eq.c
json_eq_SOURCES = $(json_eq_self_sources)
json_eq_CPPFLAGS = $(JSON_C_CFLAGS) $(WARN_CFLAGS) -I$(top_srcdir)/src
json_eq_LDADD = $(JSON_C_LIBS) -lm -lestr
json_eq_LDFLAGS = -no-install
#user_test_SOURCES = user_test.c
#user_test_CPPFLAGS = $(LIBLOGNORM_CFLAGS) $(JSON_C_CFLAGS) $(LIBESTR_CFLAGS)
#user_test_LDADD = $(JSON_C_LIBS) $(LIBLOGNORM_LIBS) $(LIBESTR_LIBS) ../compat/compat.la
#user_test_LDFLAGS = -no-install
# The following tests are for the new pdag-based engine (v2+).
#
# A few notes are in order:
#
# removed field_float_with_invalid_ruledef.sh because the test is not valid.
# more info: https://github.com/rsyslog/liblognorm/issues/98
# note that probably the other currently disabled *_invalid_*.sh
# tests are also affected.
#
# there seems to be a problem with some format in cisco-interface-spec.
# Probably this was just not seen in v1, because of some imprecision
# in the ptree normalizer. Pushing the equivalent v2 test back until the v2
# implementation is further developed.
# now come tests for the legacy (v1) engine
TESTS_SHELLSCRIPTS = usrdef_simple.sh usrdef_two.sh usrdef_twotypes.sh \
usrdef_actual1.sh usrdef_ipaddr.sh usrdef_ipaddr_dotdot.sh \
usrdef_ipaddr_dotdot2.sh usrdef_ipaddr_dotdot3.sh \
usrdef_nested_segfault.sh missing_line_ending.sh \
lognormalizer-invld-call.sh string_rb_simple.sh \
string_rb_simple_2_lines.sh names.sh literal.sh include.sh \
include_RULEBASES.sh seq_simple.sh runaway_rule.sh \
runaway_rule_comment.sh annotate.sh alternative_simple.sh \
alternative_three.sh alternative_nested.sh \
alternative_segfault.sh repeat_very_simple.sh repeat_simple.sh \
repeat_mismatch_in_while.sh repeat_while_alternative.sh \
repeat_alternative_nested.sh parser_prios.sh \
parser_whitespace.sh parser_whitespace_jsoncnf.sh parser_LF.sh \
parser_LF_jsoncnf.sh strict_prefix_actual_sample1.sh \
strict_prefix_matching_1.sh strict_prefix_matching_2.sh \
field_string.sh field_string_perm_chars.sh \
field_string_lazy_matching.sh field_string_doc_sample_lazy.sh \
field_number.sh field_number-fmt_number.sh \
field_number_maxval.sh field_hexnumber.sh \
field_hexnumber-fmt_number.sh field_hexnumber_jsoncnf.sh \
field_hexnumber_range.sh field_hexnumber_range_jsoncnf.sh \
rule_last_str_short.sh field_mac48.sh field_mac48_jsoncnf.sh \
field_name_value.sh field_name_value_jsoncnf.sh \
field_kernel_timestamp.sh field_kernel_timestamp_jsoncnf.sh \
field_whitespace.sh rule_last_str_long.sh \
field_whitespace_jsoncnf.sh field_rest.sh \
field_rest_jsoncnf.sh field_json.sh field_json_jsoncnf.sh \
field_cee-syslog.sh field_cee-syslog_jsoncnf.sh field_ipv6.sh \
field_ipv6_jsoncnf.sh field_v2-iptables.sh \
field_v2-iptables_jsoncnf.sh field_cef.sh field_cef_jsoncnf.sh \
field_checkpoint-lea.sh field_checkpoint-lea_jsoncnf.sh \
field_checkpoint-lea-terminator.sh field_duration.sh \
field_duration_jsoncnf.sh field_float.sh \
field_float-fmt_number.sh field_float_jsoncnf.sh \
field_rfc5424timestamp-fmt_timestamp-unix.sh \
field_rfc5424timestamp-fmt_timestamp-unix-ms.sh \
very_long_logline_jsoncnf.sh missing_line_ending_v1.sh \
runaway_rule_v1.sh runaway_rule_comment_v1.sh \
field_hexnumber_v1.sh field_mac48_v1.sh field_name_value_v1.sh \
field_kernel_timestamp_v1.sh field_whitespace_v1.sh \
field_rest_v1.sh field_json_v1.sh field_cee-syslog_v1.sh \
field_ipv6_v1.sh field_v2-iptables_v1.sh field_cef_v1.sh \
field_checkpoint-lea_v1.sh field_duration_v1.sh \
field_float_v1.sh field_cee-syslog_v1.sh field_tokenized.sh \
field_tokenized_with_invalid_ruledef.sh field_recursive.sh \
field_tokenized_recursive.sh field_interpret.sh \
field_interpret_with_invalid_ruledef.sh field_descent.sh \
field_descent_with_invalid_ruledef.sh field_suffixed.sh \
field_suffixed_with_invalid_ruledef.sh \
field_cisco-interface-spec.sh \
field_cisco-interface-spec-at-EOL.sh \
field_float_with_invalid_ruledef.sh very_long_logline.sh
#re-add to TESTS if needed: user_test
TESTS = $(TESTS_SHELLSCRIPTS) $(am__append_1)
REGEXP_TESTS = \
field_regex_default_group_parse_and_return.sh \
field_regex_invalid_args.sh \
field_regex_with_consume_group.sh \
field_regex_with_consume_group_and_return_group.sh \
field_regex_with_negation.sh \
field_tokenized_with_regex.sh \
field_regex_while_regex_support_is_disabled.sh
EXTRA_DIST = exec.sh \
$(TESTS_SHELLSCRIPTS) \
$(REGEXP_TESTS) \
$(json_eq_self_sources) \
$(user_test_SOURCES)
all: all-am
.SUFFIXES:
.SUFFIXES: .c .lo .log .o .obj .test .test$(EXEEXT) .trs
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
( cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh ) \
&& { if test -f $@; then exit 0; else break; fi; }; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu tests/Makefile'; \
$(am__cd) $(top_srcdir) && \
$(AUTOMAKE) --gnu tests/Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(am__aclocal_m4_deps):
options.sh: $(top_builddir)/config.status $(srcdir)/options.sh.in
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@
clean-checkPROGRAMS:
@list='$(check_PROGRAMS)'; test -n "$$list" || exit 0; \
echo " rm -f" $$list; \
rm -f $$list || exit $$?; \
test -n "$(EXEEXT)" || exit 0; \
list=`for p in $$list; do echo "$$p"; done | sed 's/$(EXEEXT)$$//'`; \
echo " rm -f" $$list; \
rm -f $$list
json_eq$(EXEEXT): $(json_eq_OBJECTS) $(json_eq_DEPENDENCIES) $(EXTRA_json_eq_DEPENDENCIES)
@rm -f json_eq$(EXEEXT)
$(AM_V_CCLD)$(json_eq_LINK) $(json_eq_OBJECTS) $(json_eq_LDADD) $(LIBS)
mostlyclean-compile:
-rm -f *.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/json_eq-json_eq.Po@am__quote@
.c.o:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ $<
.c.obj:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(COMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ `$(CYGPATH_W) '$<'`
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(COMPILE) -c -o $@ `$(CYGPATH_W) '$<'`
.c.lo:
@am__fastdepCC_TRUE@ $(AM_V_CC)$(LTCOMPILE) -MT $@ -MD -MP -MF $(DEPDIR)/$*.Tpo -c -o $@ $<
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/$*.Tpo $(DEPDIR)/$*.Plo
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(LTCOMPILE) -c -o $@ $<
json_eq-json_eq.o: json_eq.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(json_eq_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT json_eq-json_eq.o -MD -MP -MF $(DEPDIR)/json_eq-json_eq.Tpo -c -o json_eq-json_eq.o `test -f 'json_eq.c' || echo '$(srcdir)/'`json_eq.c
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/json_eq-json_eq.Tpo $(DEPDIR)/json_eq-json_eq.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='json_eq.c' object='json_eq-json_eq.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(json_eq_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o json_eq-json_eq.o `test -f 'json_eq.c' || echo '$(srcdir)/'`json_eq.c
json_eq-json_eq.obj: json_eq.c
@am__fastdepCC_TRUE@ $(AM_V_CC)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(json_eq_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT json_eq-json_eq.obj -MD -MP -MF $(DEPDIR)/json_eq-json_eq.Tpo -c -o json_eq-json_eq.obj `if test -f 'json_eq.c'; then $(CYGPATH_W) 'json_eq.c'; else $(CYGPATH_W) '$(srcdir)/json_eq.c'; fi`
@am__fastdepCC_TRUE@ $(AM_V_at)$(am__mv) $(DEPDIR)/json_eq-json_eq.Tpo $(DEPDIR)/json_eq-json_eq.Po
@AMDEP_TRUE@@am__fastdepCC_FALSE@ $(AM_V_CC)source='json_eq.c' object='json_eq-json_eq.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(AM_V_CC@am__nodep@)$(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(json_eq_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o json_eq-json_eq.obj `if test -f 'json_eq.c'; then $(CYGPATH_W) 'json_eq.c'; else $(CYGPATH_W) '$(srcdir)/json_eq.c'; fi`
mostlyclean-libtool:
-rm -f *.lo
clean-libtool:
-rm -rf .libs _libs
ID: $(am__tagged_files)
$(am__define_uniq_tagged_files); mkid -fID $$unique
tags: tags-am
TAGS: tags
tags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
set x; \
here=`pwd`; \
$(am__define_uniq_tagged_files); \
shift; \
if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
if test $$# -gt 0; then \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
"$$@" $$unique; \
else \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$unique; \
fi; \
fi
ctags: ctags-am
CTAGS: ctags
ctags-am: $(TAGS_DEPENDENCIES) $(am__tagged_files)
$(am__define_uniq_tagged_files); \
test -z "$(CTAGS_ARGS)$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& $(am__cd) $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) "$$here"
cscopelist: cscopelist-am
cscopelist-am: $(am__tagged_files)
list='$(am__tagged_files)'; \
case "$(srcdir)" in \
[\\/]* | ?:[\\/]*) sdir="$(srcdir)" ;; \
*) sdir=$(subdir)/$(srcdir) ;; \
esac; \
for i in $$list; do \
if test -f "$$i"; then \
echo "$(subdir)/$$i"; \
else \
echo "$$sdir/$$i"; \
fi; \
done >> $(top_builddir)/cscope.files
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
# Recover from deleted '.trs' file; this should ensure that
# "rm -f foo.log; make foo.trs" re-run 'foo.test', and re-create
# both 'foo.log' and 'foo.trs'. Break the recipe in two subshells
# to avoid problems with "make -n".
.log.trs:
rm -f $< $@
$(MAKE) $(AM_MAKEFLAGS) $<
# Leading 'am--fnord' is there to ensure the list of targets does not
# expand to empty, as could happen e.g. with make check TESTS=''.
am--fnord $(TEST_LOGS) $(TEST_LOGS:.log=.trs): $(am__force_recheck)
am--force-recheck:
@:
$(TEST_SUITE_LOG): $(TEST_LOGS)
@$(am__set_TESTS_bases); \
am__f_ok () { test -f "$$1" && test -r "$$1"; }; \
redo_bases=`for i in $$bases; do \
am__f_ok $$i.trs && am__f_ok $$i.log || echo $$i; \
done`; \
if test -n "$$redo_bases"; then \
redo_logs=`for i in $$redo_bases; do echo $$i.log; done`; \
redo_results=`for i in $$redo_bases; do echo $$i.trs; done`; \
if $(am__make_dryrun); then :; else \
rm -f $$redo_logs && rm -f $$redo_results || exit 1; \
fi; \
fi; \
if test -n "$$am__remaking_logs"; then \
echo "fatal: making $(TEST_SUITE_LOG): possible infinite" \
"recursion detected" >&2; \
elif test -n "$$redo_logs"; then \
am__remaking_logs=yes $(MAKE) $(AM_MAKEFLAGS) $$redo_logs; \
fi; \
if $(am__make_dryrun); then :; else \
st=0; \
errmsg="fatal: making $(TEST_SUITE_LOG): failed to create"; \
for i in $$redo_bases; do \
test -f $$i.trs && test -r $$i.trs \
|| { echo "$$errmsg $$i.trs" >&2; st=1; }; \
test -f $$i.log && test -r $$i.log \
|| { echo "$$errmsg $$i.log" >&2; st=1; }; \
done; \
test $$st -eq 0 || exit 1; \
fi
@$(am__sh_e_setup); $(am__tty_colors); $(am__set_TESTS_bases); \
ws='[ ]'; \
results=`for b in $$bases; do echo $$b.trs; done`; \
test -n "$$results" || results=/dev/null; \
all=` grep "^$$ws*:test-result:" $$results | wc -l`; \
pass=` grep "^$$ws*:test-result:$$ws*PASS" $$results | wc -l`; \
fail=` grep "^$$ws*:test-result:$$ws*FAIL" $$results | wc -l`; \
skip=` grep "^$$ws*:test-result:$$ws*SKIP" $$results | wc -l`; \
xfail=`grep "^$$ws*:test-result:$$ws*XFAIL" $$results | wc -l`; \
xpass=`grep "^$$ws*:test-result:$$ws*XPASS" $$results | wc -l`; \
error=`grep "^$$ws*:test-result:$$ws*ERROR" $$results | wc -l`; \
if test `expr $$fail + $$xpass + $$error` -eq 0; then \
success=true; \
else \
success=false; \
fi; \
br='==================='; br=$$br$$br$$br$$br; \
result_count () \
{ \
if test x"$$1" = x"--maybe-color"; then \
maybe_colorize=yes; \
elif test x"$$1" = x"--no-color"; then \
maybe_colorize=no; \
else \
echo "$@: invalid 'result_count' usage" >&2; exit 4; \
fi; \
shift; \
desc=$$1 count=$$2; \
if test $$maybe_colorize = yes && test $$count -gt 0; then \
color_start=$$3 color_end=$$std; \
else \
color_start= color_end=; \
fi; \
echo "$${color_start}# $$desc $$count$${color_end}"; \
}; \
create_testsuite_report () \
{ \
result_count $$1 "TOTAL:" $$all "$$brg"; \
result_count $$1 "PASS: " $$pass "$$grn"; \
result_count $$1 "SKIP: " $$skip "$$blu"; \
result_count $$1 "XFAIL:" $$xfail "$$lgn"; \
result_count $$1 "FAIL: " $$fail "$$red"; \
result_count $$1 "XPASS:" $$xpass "$$red"; \
result_count $$1 "ERROR:" $$error "$$mgn"; \
}; \
{ \
echo "$(PACKAGE_STRING): $(subdir)/$(TEST_SUITE_LOG)" | \
$(am__rst_title); \
create_testsuite_report --no-color; \
echo; \
echo ".. contents:: :depth: 2"; \
echo; \
for b in $$bases; do echo $$b; done \
| $(am__create_global_log); \
} >$(TEST_SUITE_LOG).tmp || exit 1; \
mv $(TEST_SUITE_LOG).tmp $(TEST_SUITE_LOG); \
if $$success; then \
col="$$grn"; \
else \
col="$$red"; \
test x"$$VERBOSE" = x || cat $(TEST_SUITE_LOG); \
fi; \
echo "$${col}$$br$${std}"; \
echo "$${col}Testsuite summary for $(PACKAGE_STRING)$${std}"; \
echo "$${col}$$br$${std}"; \
create_testsuite_report --maybe-color; \
echo "$$col$$br$$std"; \
if $$success; then :; else \
echo "$${col}See $(subdir)/$(TEST_SUITE_LOG)$${std}"; \
if test -n "$(PACKAGE_BUGREPORT)"; then \
echo "$${col}Please report to $(PACKAGE_BUGREPORT)$${std}"; \
fi; \
echo "$$col$$br$$std"; \
fi; \
$$success || exit 1
check-TESTS:
@list='$(RECHECK_LOGS)'; test -z "$$list" || rm -f $$list
@list='$(RECHECK_LOGS:.log=.trs)'; test -z "$$list" || rm -f $$list
@test -z "$(TEST_SUITE_LOG)" || rm -f $(TEST_SUITE_LOG)
@set +e; $(am__set_TESTS_bases); \
log_list=`for i in $$bases; do echo $$i.log; done`; \
trs_list=`for i in $$bases; do echo $$i.trs; done`; \
log_list=`echo $$log_list`; trs_list=`echo $$trs_list`; \
$(MAKE) $(AM_MAKEFLAGS) $(TEST_SUITE_LOG) TEST_LOGS="$$log_list"; \
exit $$?;
recheck: all $(check_PROGRAMS)
@test -z "$(TEST_SUITE_LOG)" || rm -f $(TEST_SUITE_LOG)
@set +e; $(am__set_TESTS_bases); \
bases=`for i in $$bases; do echo $$i; done \
| $(am__list_recheck_tests)` || exit 1; \
log_list=`for i in $$bases; do echo $$i.log; done`; \
log_list=`echo $$log_list`; \
$(MAKE) $(AM_MAKEFLAGS) $(TEST_SUITE_LOG) \
am__force_recheck=am--force-recheck \
TEST_LOGS="$$log_list"; \
exit $$?
usrdef_simple.sh.log: usrdef_simple.sh
@p='usrdef_simple.sh'; \
b='usrdef_simple.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_two.sh.log: usrdef_two.sh
@p='usrdef_two.sh'; \
b='usrdef_two.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_twotypes.sh.log: usrdef_twotypes.sh
@p='usrdef_twotypes.sh'; \
b='usrdef_twotypes.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_actual1.sh.log: usrdef_actual1.sh
@p='usrdef_actual1.sh'; \
b='usrdef_actual1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_ipaddr.sh.log: usrdef_ipaddr.sh
@p='usrdef_ipaddr.sh'; \
b='usrdef_ipaddr.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_ipaddr_dotdot.sh.log: usrdef_ipaddr_dotdot.sh
@p='usrdef_ipaddr_dotdot.sh'; \
b='usrdef_ipaddr_dotdot.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_ipaddr_dotdot2.sh.log: usrdef_ipaddr_dotdot2.sh
@p='usrdef_ipaddr_dotdot2.sh'; \
b='usrdef_ipaddr_dotdot2.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_ipaddr_dotdot3.sh.log: usrdef_ipaddr_dotdot3.sh
@p='usrdef_ipaddr_dotdot3.sh'; \
b='usrdef_ipaddr_dotdot3.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
usrdef_nested_segfault.sh.log: usrdef_nested_segfault.sh
@p='usrdef_nested_segfault.sh'; \
b='usrdef_nested_segfault.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
missing_line_ending.sh.log: missing_line_ending.sh
@p='missing_line_ending.sh'; \
b='missing_line_ending.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
lognormalizer-invld-call.sh.log: lognormalizer-invld-call.sh
@p='lognormalizer-invld-call.sh'; \
b='lognormalizer-invld-call.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
string_rb_simple.sh.log: string_rb_simple.sh
@p='string_rb_simple.sh'; \
b='string_rb_simple.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
string_rb_simple_2_lines.sh.log: string_rb_simple_2_lines.sh
@p='string_rb_simple_2_lines.sh'; \
b='string_rb_simple_2_lines.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
names.sh.log: names.sh
@p='names.sh'; \
b='names.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
literal.sh.log: literal.sh
@p='literal.sh'; \
b='literal.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
include.sh.log: include.sh
@p='include.sh'; \
b='include.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
include_RULEBASES.sh.log: include_RULEBASES.sh
@p='include_RULEBASES.sh'; \
b='include_RULEBASES.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
seq_simple.sh.log: seq_simple.sh
@p='seq_simple.sh'; \
b='seq_simple.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
runaway_rule.sh.log: runaway_rule.sh
@p='runaway_rule.sh'; \
b='runaway_rule.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
runaway_rule_comment.sh.log: runaway_rule_comment.sh
@p='runaway_rule_comment.sh'; \
b='runaway_rule_comment.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
annotate.sh.log: annotate.sh
@p='annotate.sh'; \
b='annotate.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
alternative_simple.sh.log: alternative_simple.sh
@p='alternative_simple.sh'; \
b='alternative_simple.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
alternative_three.sh.log: alternative_three.sh
@p='alternative_three.sh'; \
b='alternative_three.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
alternative_nested.sh.log: alternative_nested.sh
@p='alternative_nested.sh'; \
b='alternative_nested.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
alternative_segfault.sh.log: alternative_segfault.sh
@p='alternative_segfault.sh'; \
b='alternative_segfault.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
repeat_very_simple.sh.log: repeat_very_simple.sh
@p='repeat_very_simple.sh'; \
b='repeat_very_simple.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
repeat_simple.sh.log: repeat_simple.sh
@p='repeat_simple.sh'; \
b='repeat_simple.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
repeat_mismatch_in_while.sh.log: repeat_mismatch_in_while.sh
@p='repeat_mismatch_in_while.sh'; \
b='repeat_mismatch_in_while.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
repeat_while_alternative.sh.log: repeat_while_alternative.sh
@p='repeat_while_alternative.sh'; \
b='repeat_while_alternative.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
repeat_alternative_nested.sh.log: repeat_alternative_nested.sh
@p='repeat_alternative_nested.sh'; \
b='repeat_alternative_nested.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
parser_prios.sh.log: parser_prios.sh
@p='parser_prios.sh'; \
b='parser_prios.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
parser_whitespace.sh.log: parser_whitespace.sh
@p='parser_whitespace.sh'; \
b='parser_whitespace.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
parser_whitespace_jsoncnf.sh.log: parser_whitespace_jsoncnf.sh
@p='parser_whitespace_jsoncnf.sh'; \
b='parser_whitespace_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
parser_LF.sh.log: parser_LF.sh
@p='parser_LF.sh'; \
b='parser_LF.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
parser_LF_jsoncnf.sh.log: parser_LF_jsoncnf.sh
@p='parser_LF_jsoncnf.sh'; \
b='parser_LF_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
strict_prefix_actual_sample1.sh.log: strict_prefix_actual_sample1.sh
@p='strict_prefix_actual_sample1.sh'; \
b='strict_prefix_actual_sample1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
strict_prefix_matching_1.sh.log: strict_prefix_matching_1.sh
@p='strict_prefix_matching_1.sh'; \
b='strict_prefix_matching_1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
strict_prefix_matching_2.sh.log: strict_prefix_matching_2.sh
@p='strict_prefix_matching_2.sh'; \
b='strict_prefix_matching_2.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_string.sh.log: field_string.sh
@p='field_string.sh'; \
b='field_string.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_string_perm_chars.sh.log: field_string_perm_chars.sh
@p='field_string_perm_chars.sh'; \
b='field_string_perm_chars.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_string_lazy_matching.sh.log: field_string_lazy_matching.sh
@p='field_string_lazy_matching.sh'; \
b='field_string_lazy_matching.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_string_doc_sample_lazy.sh.log: field_string_doc_sample_lazy.sh
@p='field_string_doc_sample_lazy.sh'; \
b='field_string_doc_sample_lazy.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_number.sh.log: field_number.sh
@p='field_number.sh'; \
b='field_number.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_number-fmt_number.sh.log: field_number-fmt_number.sh
@p='field_number-fmt_number.sh'; \
b='field_number-fmt_number.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_number_maxval.sh.log: field_number_maxval.sh
@p='field_number_maxval.sh'; \
b='field_number_maxval.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_hexnumber.sh.log: field_hexnumber.sh
@p='field_hexnumber.sh'; \
b='field_hexnumber.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_hexnumber-fmt_number.sh.log: field_hexnumber-fmt_number.sh
@p='field_hexnumber-fmt_number.sh'; \
b='field_hexnumber-fmt_number.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_hexnumber_jsoncnf.sh.log: field_hexnumber_jsoncnf.sh
@p='field_hexnumber_jsoncnf.sh'; \
b='field_hexnumber_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_hexnumber_range.sh.log: field_hexnumber_range.sh
@p='field_hexnumber_range.sh'; \
b='field_hexnumber_range.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_hexnumber_range_jsoncnf.sh.log: field_hexnumber_range_jsoncnf.sh
@p='field_hexnumber_range_jsoncnf.sh'; \
b='field_hexnumber_range_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
rule_last_str_short.sh.log: rule_last_str_short.sh
@p='rule_last_str_short.sh'; \
b='rule_last_str_short.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_mac48.sh.log: field_mac48.sh
@p='field_mac48.sh'; \
b='field_mac48.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_mac48_jsoncnf.sh.log: field_mac48_jsoncnf.sh
@p='field_mac48_jsoncnf.sh'; \
b='field_mac48_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_name_value.sh.log: field_name_value.sh
@p='field_name_value.sh'; \
b='field_name_value.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_name_value_jsoncnf.sh.log: field_name_value_jsoncnf.sh
@p='field_name_value_jsoncnf.sh'; \
b='field_name_value_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_kernel_timestamp.sh.log: field_kernel_timestamp.sh
@p='field_kernel_timestamp.sh'; \
b='field_kernel_timestamp.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_kernel_timestamp_jsoncnf.sh.log: field_kernel_timestamp_jsoncnf.sh
@p='field_kernel_timestamp_jsoncnf.sh'; \
b='field_kernel_timestamp_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_whitespace.sh.log: field_whitespace.sh
@p='field_whitespace.sh'; \
b='field_whitespace.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
rule_last_str_long.sh.log: rule_last_str_long.sh
@p='rule_last_str_long.sh'; \
b='rule_last_str_long.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_whitespace_jsoncnf.sh.log: field_whitespace_jsoncnf.sh
@p='field_whitespace_jsoncnf.sh'; \
b='field_whitespace_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_rest.sh.log: field_rest.sh
@p='field_rest.sh'; \
b='field_rest.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_rest_jsoncnf.sh.log: field_rest_jsoncnf.sh
@p='field_rest_jsoncnf.sh'; \
b='field_rest_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_json.sh.log: field_json.sh
@p='field_json.sh'; \
b='field_json.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_json_jsoncnf.sh.log: field_json_jsoncnf.sh
@p='field_json_jsoncnf.sh'; \
b='field_json_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cee-syslog.sh.log: field_cee-syslog.sh
@p='field_cee-syslog.sh'; \
b='field_cee-syslog.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cee-syslog_jsoncnf.sh.log: field_cee-syslog_jsoncnf.sh
@p='field_cee-syslog_jsoncnf.sh'; \
b='field_cee-syslog_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_ipv6.sh.log: field_ipv6.sh
@p='field_ipv6.sh'; \
b='field_ipv6.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_ipv6_jsoncnf.sh.log: field_ipv6_jsoncnf.sh
@p='field_ipv6_jsoncnf.sh'; \
b='field_ipv6_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_v2-iptables.sh.log: field_v2-iptables.sh
@p='field_v2-iptables.sh'; \
b='field_v2-iptables.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_v2-iptables_jsoncnf.sh.log: field_v2-iptables_jsoncnf.sh
@p='field_v2-iptables_jsoncnf.sh'; \
b='field_v2-iptables_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cef.sh.log: field_cef.sh
@p='field_cef.sh'; \
b='field_cef.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cef_jsoncnf.sh.log: field_cef_jsoncnf.sh
@p='field_cef_jsoncnf.sh'; \
b='field_cef_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_checkpoint-lea.sh.log: field_checkpoint-lea.sh
@p='field_checkpoint-lea.sh'; \
b='field_checkpoint-lea.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_checkpoint-lea_jsoncnf.sh.log: field_checkpoint-lea_jsoncnf.sh
@p='field_checkpoint-lea_jsoncnf.sh'; \
b='field_checkpoint-lea_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_checkpoint-lea-terminator.sh.log: field_checkpoint-lea-terminator.sh
@p='field_checkpoint-lea-terminator.sh'; \
b='field_checkpoint-lea-terminator.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_duration.sh.log: field_duration.sh
@p='field_duration.sh'; \
b='field_duration.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_duration_jsoncnf.sh.log: field_duration_jsoncnf.sh
@p='field_duration_jsoncnf.sh'; \
b='field_duration_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_float.sh.log: field_float.sh
@p='field_float.sh'; \
b='field_float.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_float-fmt_number.sh.log: field_float-fmt_number.sh
@p='field_float-fmt_number.sh'; \
b='field_float-fmt_number.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_float_jsoncnf.sh.log: field_float_jsoncnf.sh
@p='field_float_jsoncnf.sh'; \
b='field_float_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_rfc5424timestamp-fmt_timestamp-unix.sh.log: field_rfc5424timestamp-fmt_timestamp-unix.sh
@p='field_rfc5424timestamp-fmt_timestamp-unix.sh'; \
b='field_rfc5424timestamp-fmt_timestamp-unix.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_rfc5424timestamp-fmt_timestamp-unix-ms.sh.log: field_rfc5424timestamp-fmt_timestamp-unix-ms.sh
@p='field_rfc5424timestamp-fmt_timestamp-unix-ms.sh'; \
b='field_rfc5424timestamp-fmt_timestamp-unix-ms.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
very_long_logline_jsoncnf.sh.log: very_long_logline_jsoncnf.sh
@p='very_long_logline_jsoncnf.sh'; \
b='very_long_logline_jsoncnf.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
missing_line_ending_v1.sh.log: missing_line_ending_v1.sh
@p='missing_line_ending_v1.sh'; \
b='missing_line_ending_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
runaway_rule_v1.sh.log: runaway_rule_v1.sh
@p='runaway_rule_v1.sh'; \
b='runaway_rule_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
runaway_rule_comment_v1.sh.log: runaway_rule_comment_v1.sh
@p='runaway_rule_comment_v1.sh'; \
b='runaway_rule_comment_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_hexnumber_v1.sh.log: field_hexnumber_v1.sh
@p='field_hexnumber_v1.sh'; \
b='field_hexnumber_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_mac48_v1.sh.log: field_mac48_v1.sh
@p='field_mac48_v1.sh'; \
b='field_mac48_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_name_value_v1.sh.log: field_name_value_v1.sh
@p='field_name_value_v1.sh'; \
b='field_name_value_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_kernel_timestamp_v1.sh.log: field_kernel_timestamp_v1.sh
@p='field_kernel_timestamp_v1.sh'; \
b='field_kernel_timestamp_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_whitespace_v1.sh.log: field_whitespace_v1.sh
@p='field_whitespace_v1.sh'; \
b='field_whitespace_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_rest_v1.sh.log: field_rest_v1.sh
@p='field_rest_v1.sh'; \
b='field_rest_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_json_v1.sh.log: field_json_v1.sh
@p='field_json_v1.sh'; \
b='field_json_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cee-syslog_v1.sh.log: field_cee-syslog_v1.sh
@p='field_cee-syslog_v1.sh'; \
b='field_cee-syslog_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_ipv6_v1.sh.log: field_ipv6_v1.sh
@p='field_ipv6_v1.sh'; \
b='field_ipv6_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_v2-iptables_v1.sh.log: field_v2-iptables_v1.sh
@p='field_v2-iptables_v1.sh'; \
b='field_v2-iptables_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cef_v1.sh.log: field_cef_v1.sh
@p='field_cef_v1.sh'; \
b='field_cef_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_checkpoint-lea_v1.sh.log: field_checkpoint-lea_v1.sh
@p='field_checkpoint-lea_v1.sh'; \
b='field_checkpoint-lea_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_duration_v1.sh.log: field_duration_v1.sh
@p='field_duration_v1.sh'; \
b='field_duration_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_float_v1.sh.log: field_float_v1.sh
@p='field_float_v1.sh'; \
b='field_float_v1.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_tokenized.sh.log: field_tokenized.sh
@p='field_tokenized.sh'; \
b='field_tokenized.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_tokenized_with_invalid_ruledef.sh.log: field_tokenized_with_invalid_ruledef.sh
@p='field_tokenized_with_invalid_ruledef.sh'; \
b='field_tokenized_with_invalid_ruledef.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_recursive.sh.log: field_recursive.sh
@p='field_recursive.sh'; \
b='field_recursive.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_tokenized_recursive.sh.log: field_tokenized_recursive.sh
@p='field_tokenized_recursive.sh'; \
b='field_tokenized_recursive.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_interpret.sh.log: field_interpret.sh
@p='field_interpret.sh'; \
b='field_interpret.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_interpret_with_invalid_ruledef.sh.log: field_interpret_with_invalid_ruledef.sh
@p='field_interpret_with_invalid_ruledef.sh'; \
b='field_interpret_with_invalid_ruledef.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_descent.sh.log: field_descent.sh
@p='field_descent.sh'; \
b='field_descent.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_descent_with_invalid_ruledef.sh.log: field_descent_with_invalid_ruledef.sh
@p='field_descent_with_invalid_ruledef.sh'; \
b='field_descent_with_invalid_ruledef.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_suffixed.sh.log: field_suffixed.sh
@p='field_suffixed.sh'; \
b='field_suffixed.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_suffixed_with_invalid_ruledef.sh.log: field_suffixed_with_invalid_ruledef.sh
@p='field_suffixed_with_invalid_ruledef.sh'; \
b='field_suffixed_with_invalid_ruledef.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cisco-interface-spec.sh.log: field_cisco-interface-spec.sh
@p='field_cisco-interface-spec.sh'; \
b='field_cisco-interface-spec.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_cisco-interface-spec-at-EOL.sh.log: field_cisco-interface-spec-at-EOL.sh
@p='field_cisco-interface-spec-at-EOL.sh'; \
b='field_cisco-interface-spec-at-EOL.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_float_with_invalid_ruledef.sh.log: field_float_with_invalid_ruledef.sh
@p='field_float_with_invalid_ruledef.sh'; \
b='field_float_with_invalid_ruledef.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
very_long_logline.sh.log: very_long_logline.sh
@p='very_long_logline.sh'; \
b='very_long_logline.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_regex_default_group_parse_and_return.sh.log: field_regex_default_group_parse_and_return.sh
@p='field_regex_default_group_parse_and_return.sh'; \
b='field_regex_default_group_parse_and_return.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_regex_invalid_args.sh.log: field_regex_invalid_args.sh
@p='field_regex_invalid_args.sh'; \
b='field_regex_invalid_args.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_regex_with_consume_group.sh.log: field_regex_with_consume_group.sh
@p='field_regex_with_consume_group.sh'; \
b='field_regex_with_consume_group.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_regex_with_consume_group_and_return_group.sh.log: field_regex_with_consume_group_and_return_group.sh
@p='field_regex_with_consume_group_and_return_group.sh'; \
b='field_regex_with_consume_group_and_return_group.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_regex_with_negation.sh.log: field_regex_with_negation.sh
@p='field_regex_with_negation.sh'; \
b='field_regex_with_negation.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_tokenized_with_regex.sh.log: field_tokenized_with_regex.sh
@p='field_tokenized_with_regex.sh'; \
b='field_tokenized_with_regex.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
field_regex_while_regex_support_is_disabled.sh.log: field_regex_while_regex_support_is_disabled.sh
@p='field_regex_while_regex_support_is_disabled.sh'; \
b='field_regex_while_regex_support_is_disabled.sh'; \
$(am__check_pre) $(LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_LOG_DRIVER_FLAGS) $(LOG_DRIVER_FLAGS) -- $(LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
.test.log:
@p='$<'; \
$(am__set_b); \
$(am__check_pre) $(TEST_LOG_DRIVER) --test-name "$$f" \
--log-file $$b.log --trs-file $$b.trs \
$(am__common_driver_flags) $(AM_TEST_LOG_DRIVER_FLAGS) $(TEST_LOG_DRIVER_FLAGS) -- $(TEST_LOG_COMPILE) \
"$$tst" $(AM_TESTS_FD_REDIRECT)
@am__EXEEXT_TRUE@.test$(EXEEXT).log:
@am__EXEEXT_TRUE@ @p='$<'; \
@am__EXEEXT_TRUE@ $(am__set_b); \
@am__EXEEXT_TRUE@ $(am__check_pre) $(TEST_LOG_DRIVER) --test-name "$$f" \
@am__EXEEXT_TRUE@ --log-file $$b.log --trs-file $$b.trs \
@am__EXEEXT_TRUE@ $(am__common_driver_flags) $(AM_TEST_LOG_DRIVER_FLAGS) $(TEST_LOG_DRIVER_FLAGS) -- $(TEST_LOG_COMPILE) \
@am__EXEEXT_TRUE@ "$$tst" $(AM_TESTS_FD_REDIRECT)
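# Note: the .log rules above are emitted by Automake's parallel-tests
# harness, one per test script. As a rough sketch (not a documented
# guarantee of this Makefile), a single test can typically be re-run via
# the standard Automake idiom:
#   make check TESTS='field_number.sh'
# which rebuilds only that test's .log/.trs files and the summary log.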
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \
list='$(DISTFILES)'; \
dist_files=`for file in $$list; do echo $$file; done | \
sed -e "s|^$$srcdirstrip/||;t" \
-e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \
case $$dist_files in \
*/*) $(MKDIR_P) `echo "$$dist_files" | \
sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \
sort -u` ;; \
esac; \
for file in $$dist_files; do \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
if test -d $$d/$$file; then \
dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \
if test -d "$(distdir)/$$file"; then \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \
find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \
fi; \
cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \
else \
test -f "$(distdir)/$$file" \
|| cp -p $$d/$$file "$(distdir)/$$file" \
|| exit 1; \
fi; \
done
check-am: all-am
$(MAKE) $(AM_MAKEFLAGS) $(check_PROGRAMS)
$(MAKE) $(AM_MAKEFLAGS) check-TESTS
check: check-am
all-am: Makefile
installdirs:
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
if test -z '$(STRIP)'; then \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
install; \
else \
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
"INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \
fi
mostlyclean-generic:
-test -z "$(TEST_LOGS)" || rm -f $(TEST_LOGS)
-test -z "$(TEST_LOGS:.log=.trs)" || rm -f $(TEST_LOGS:.log=.trs)
-test -z "$(TEST_SUITE_LOG)" || rm -f $(TEST_SUITE_LOG)
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
-test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-checkPROGRAMS clean-generic clean-libtool \
mostlyclean-am
distclean: distclean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
html-am:
info: info-am
info-am:
install-data-am:
install-dvi: install-dvi-am
install-dvi-am:
install-exec-am:
install-html: install-html-am
install-html-am:
install-info: install-info-am
install-info-am:
install-man:
install-pdf: install-pdf-am
install-pdf-am:
install-ps: install-ps-am
install-ps-am:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic \
mostlyclean-libtool
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am:
.MAKE: check-am install-am install-strip
.PHONY: CTAGS GTAGS TAGS all all-am check check-TESTS check-am clean \
clean-checkPROGRAMS clean-generic clean-libtool cscopelist-am \
ctags ctags-am distclean distclean-compile distclean-generic \
distclean-libtool distclean-tags distdir dvi dvi-am html \
html-am info info-am install install-am install-data \
install-data-am install-dvi install-dvi-am install-exec \
install-exec-am install-html install-html-am install-info \
install-info-am install-man install-pdf install-pdf-am \
install-ps install-ps-am install-strip installcheck \
installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-compile \
mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \
recheck tags tags-am uninstall uninstall-am
.PRECIOUS: Makefile
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:
liblognorm-2.0.6/tests/field_number-fmt_number.sh 0000755 0001750 0001750 00000001137 13370250152 017074 0000000 0000000 #!/bin/bash
# added 2017-10-02 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "number field in native format"
add_rule 'version=2'
add_rule 'rule=:here is a number %{ "type":"number", "name":"num", "format":"number"}% in dec form'
execute 'here is a number 1234 in dec form'
assert_output_json_eq '{"num": 1234}'
#check cases where parsing failure must occur
execute 'here is a number 1234in dec form'
assert_output_json_eq '{ "originalmsg": "here is a number 1234in dec form", "unparsed-data": "in dec form" }'
cleanup_tmp_files
liblognorm-2.0.6/tests/exec.sh 0000644 0001750 0001750 00000004427 13370250152 013233 0000000 0000000 # environment variables:
# GREP - if set, specifies an alternative grep implementation to use.
# Most important use case is to use GNU grep (ggrep)
# on Solaris. If unset, use "grep".
set -e
if [ "x$debug" == "xon" ]; then #get core-dump on crash
ulimit -c unlimited
fi
#cmd="../src/ln_test -v" # case to get debug info (add -vvv for more verbosity)
cmd=../src/ln_test # regular case
. ./options.sh
no_solaris10() {
if (uname -a | grep -q "SunOS.*5.10"); then
printf 'platform: %s\n' "$(uname -a)"
printf 'This looks like solaris 10, we disable known-failing tests to\n'
printf 'permit OpenCSW to build packages. However, these are real failures\n'
printf 'and so a fix should be done as soon as time permits.\n'
exit 77
fi
}
test_def() {
test_file=$(basename $1)
test_name=$(echo $test_file | sed -e 's/\..*//g')
echo ===============================================================================
echo "[${test_file}]: test for ${2}"
}
execute() {
if [ "x$debug" == "xon" ]; then
echo "======rulebase======="
cat tmp.rulebase
echo "====================="
set -x
fi
if [ "$1" == "file" ]; then
$cmd $ln_opts -r tmp.rulebase -e json > test.out < $2
else
echo "$1" | $cmd $ln_opts -r tmp.rulebase -e json > test.out
fi
echo "Out:"
cat test.out
if [ "x$debug" == "xon" ]; then
set +x
fi
}
execute_with_string() {
# $1 must be rulebase string
# $2 must be sample string
if [ "x$debug" == "xon" ]; then
echo "======rulebase======="
cat tmp.rulebase
echo "====================="
set -x
fi
echo "$2" | $cmd $ln_opts -R "$1" -e json > test.out
echo "Out:"
cat test.out
if [ "x$debug" == "xon" ]; then
set +x
fi
}
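# Hypothetical usage sketch (no test in this directory relies on it):
#   execute_with_string 'rule=:%w:word%' 'foo'
#   assert_output_json_eq '{"w": "foo"}'
# This path exercises ln_test's -R option, i.e. loading the rulebase
# from a string rather than from a file (-r).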
assert_output_contains() {
${GREP:-grep} -F "$1" < test.out
}
assert_output_json_eq() {
./json_eq "$1" "$(cat test.out)"
}
rulebase_file_name() {
if [ "x$1" == "x" ]; then
echo tmp.rulebase
else
echo $1.rulebase
fi
}
reset_rules() {
rb_file=$(rulebase_file_name $1)
rm -f $rb_file
}
add_rule() {
rb_file=$(rulebase_file_name $2)
echo $1 >> $rb_file
}
add_rule_no_LF() {
rb_file=$(rulebase_file_name $2)
echo -n $1 >> $rb_file
}
cleanup_tmp_files() {
rm -f -- test.out *.rulebase
}
reset_rules
liblognorm-2.0.6/tests/field_float_v1.sh 0000755 0001750 0001750 00000001256 13370250152 015165 0000000 0000000 #!/bin/bash
# added 2015-02-25 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
no_solaris10
test_def $0 "float field"
add_rule 'rule=:here is a number %num:float% in floating pt form'
execute 'here is a number 15.9 in floating pt form'
assert_output_json_eq '{"num": "15.9"}'
reset_rules
add_rule 'rule=:here is a negative number %num:float% for you'
execute 'here is a negative number -4.2 for you'
assert_output_json_eq '{"num": "-4.2"}'
reset_rules
add_rule 'rule=:here is another real number %real_no:float%.'
execute 'here is another real number 2.71.'
assert_output_json_eq '{"real_no": "2.71"}'
cleanup_tmp_files
liblognorm-2.0.6/tests/very_long_logline_jsoncnf.sh 0000755 0001750 0001750 00000000745 13370250152 017546 0000000 0000000 #!/bin/bash
# added 2015-09-21 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
msg="foo"
for i in $(seq 1 10); do
msg="${msg},${msg},abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ${i}"
done
test_def $0 "very long log line (v2 rulebase)"
add_rule 'version=2'
add_rule 'rule=:%{"name":"line", "type":"rest"}%'
execute $msg
assert_output_json_eq "{\"line\": \"$msg\"}"
cleanup_tmp_files
liblognorm-2.0.6/tests/very_long_logline.sh 0000755 0001750 0001750 00000000673 13370250152 016026 0000000 0000000 #!/bin/bash
# added 2015-09-21 by singh.janmejay
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
msg="foo"
for i in $(seq 1 10); do
msg="${msg},${msg},abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ${i}"
done
test_def $0 "very long log line"
add_rule 'rule=:%line:rest%'
execute $msg
assert_output_json_eq "{\"line\": \"$msg\"}"
cleanup_tmp_files
liblognorm-2.0.6/tests/parser_prios.sh 0000755 0001750 0001750 00000002132 13370250152 015011 0000000 0000000 #!/bin/bash
# added 2015-05-05 by Rainer Gerhards
# This file is part of the liblognorm project, released under ASL 2.0
. $srcdir/exec.sh
test_def $0 "parser priorities, simple case"
add_rule 'version=2'
add_rule 'rule=:%{"name":"field", "type":"mac48"}%'
add_rule 'rule=:%{"name":"rest", "type":"rest"}%'
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"field": "f0:f6:1c:5f:cc:a2"}'
execute 'f0-f6-1c-5f-cc-a2'
assert_output_json_eq '{"field": "f0-f6-1c-5f-cc-a2"}'
# things that need to match rest
execute 'f0-f6:1c:5f:cc-a2'
assert_output_json_eq '{ "rest": "f0-f6:1c:5f:cc-a2" }'
# now the same with inverted priorities. We should now always have
# rest matches.
reset_rules
add_rule 'version=2'
add_rule 'rule=:%{"name":"field", "type":"mac48", "priority":100}%'
add_rule 'rule=:%{"name":"rest", "type":"rest", "priority":10}%'
execute 'f0:f6:1c:5f:cc:a2'
assert_output_json_eq '{"rest": "f0:f6:1c:5f:cc:a2"}'
execute 'f0-f6-1c-5f-cc-a2'
assert_output_json_eq '{"rest": "f0-f6-1c-5f-cc-a2"}'
execute 'f0-f6:1c:5f:cc-a2'
assert_output_json_eq '{ "rest": "f0-f6:1c:5f:cc-a2" }'
cleanup_tmp_files
liblognorm-2.0.6/ChangeLog 0000644 0001750 0001750 00000033147 13370251116 012365 0000000 0000000 ----------------------------------------------------------------------
Version 2.0.6, 2018-11-06
- implement Checkpoint LEA transfer format
... at least if we guess right that this is the format name. This
type of format seems to be seen in syslog messages. Checkpoint does
not provide a spec, so everything is guesswork... :-(
closes https://github.com/rsyslog/liblognorm/issues/309
- made build on AIX
Thanks to Philippe Duveau for the patch.
- fixes and improvements in bash scripting
mostly based on shellcheck recommendations (via CodeFactor.com)
- string parser: add "lazy" matching mode
This introduces parameter "matching.lazy". See doc for details.
- bugfix: suppress invalid param error for field name "-"
Suppress the invalid param error for the name of the hexnumber, float,
number, date-rfc3164 and date-rfc5424 parsers. They now simply check
whether the name is "-" to make sure the error message is only
suppressed when the value is intentionally not captured.
Thanks to Sol Huebner for the patch.
closes https://github.com/rsyslog/liblognorm/issues/270
- bugfix: cisco-interface-spec did not succeed when at end of line
Thanks to Sol Huebner for the patch.
closes https://github.com/rsyslog/liblognorm/issues/229
----------------------------------------------------------------------
Version 2.0.5, 2018-04-26
- bugfix: es_str2cstr leak in string-to v1 parser
Thanks to Harshvardhan Shrivastava for the patch.
- make "make check" "succeed" on solaris 10
actually, we just ignore the CI failures so that OpenCSW can build
new packages. The problems actually exist on that platform, but
testing has shown they always existed. We currently lack the time
to really fix this, plus we never had any bug report on Solaris
(I assume nobody uses it on Solaris 10). However, that issue is a
blocker for making new rsyslog versions available on OpenCSW for
Solaris 10, so we go the dirty way of pretending there is no
problem. Note: the issue was originally not seen, as the failing
tests were added later on. So the problem was always there,
just not visible.
- some mostly cosmetic fixes detected by Coverity Scan
e.g. a memory leak that occurred only if the system was completely out of memory
----------------------------------------------------------------------
Version 2.0.4, 2017-10-04
- added support for native JSON number formats
supported by parsers: number, float, hex
- added support for creating unix timestamps
supported by parsers: date-rfc3164, date-rfc5424
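As an illustration (taken from the bundled field_number-fmt_number.sh
test; the timestamp formats are assumed to work analogously, see the
*-fmt_timestamp-unix* tests), a v2 rule can request a native JSON
number via the "format" parameter:
rule=:here is a number %{ "type":"number", "name":"num", "format":"number"}% in dec form
This emits {"num": 1234} rather than a quoted string value.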
- fixed build problems on Solaris
... but there still seem to be some code issues, manifested in
testbench failures. So use with care!
----------------------------------------------------------------------
Version 2.0.3, 2017-03-22
- add ability to load rulebase from a string
introduces new API:
int ln_loadSamplesFromString(ln_ctx ctx, const char *string);
closes https://github.com/rsyslog/liblognorm/issues/239
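A minimal usage sketch (assuming the ln_test driver's -R option, as
used by execute_with_string in tests/exec.sh, maps to this API):
echo 'msg text' | ln_test -R 'rule=:%w:word% text' -e json
i.e. the rulebase content is passed as a string instead of via -r file.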
- bugfix: string parser did not correctly parse word at end of line
- bugfix: literal parser did not always store the value if a name was specified
if
rule=:%{"type":"literal", "text":"a", "name":"var"}%
is used and a matching message is provided, the variable var is not persisted.
see also http://lists.adiscon.net/pipermail/rsyslog/2016-December/043985.html
----------------------------------------------------------------------
Version 2.0.2, 2016-11-15
- bugfix: no error was emitted on invalid "annotate" line
- "annotate": permit inline comments
- fix a problem with cross-compilation
see also: https://github.com/rsyslog/liblognorm/pull/221
Thanks to Luca Boccassi for the patch
- testbench: add test for "annotate" functionality
- bugfix: abort in literal path compaction when using "alternative" parser
When using the "alternative" parser, literal nodes could be created with
a reference count greater than one. This is valid. However, literal path
compaction did not consider this case, and so "merged" these nodes, which
led to pdag corruption and quickly to segfault.
closes https://github.com/rsyslog/liblognorm/issues/220
closes https://github.com/rsyslog/liblognorm/issues/153
- bugfix: lognormalizer could loop due to an incorrect data type
This also caused the testbench to fail on some platforms.
Thanks to Michael Biebl for this fix.
- fix misleading compiler warning
Thanks to Michael Biebl for this fix.
----------------------------------------------------------------------
Version 2.0.1, 2016-08-01
- fix public headers, which invalidly contained a strndup() definition
Thanks to Michael Biebl for this fix.
- fix some issues in pkgconfig file
Thanks to Michael Biebl for this fix.
- enhance build system to natively support systems with older
autoconf versions and/or missing autoconf-archive. In this case we
gracefully degrade functionality, but the build is still possible.
Among others, this enables builds on CentOS 5.
----------------------------------------------------------------------
Version 2.0.0, 2016-07-21
- completely rewritten, much feature-enhanced version
- requires libfastjson instead of json-c
- big improvements to testbench runs, especially on travis
among others, the static analyzer is now run and testbench throws
an error if the static analyzer (via clang) is not clean
- lognormalizer tool can now handle lines larger than 10k characters
Thanks to Janmejay Singh for the patch
----------------------------------------------------------------------
Version 1.1.3, 2015-??-?? [no official release]
- make work on Solaris
- check for runaway rules.
A runaway rule is one that has unmatched percent signs and thus
is not terminated properly at its end. This also means we no longer
accept "rule=" at the first column of a continuation line, which is
no problem (see doc for more information).
- fix: process last line if it misses the terminating LF
This problem occurs with the very last line of a rulebase (at EOF).
If it is not properly terminated (LF missing), it is silently ignored.
Previous versions did obviously process lines in this case. While
technically this is invalid input, we can't rule out that such rulebases
exist. For example, they do in the rsyslog testbench, which made
us aware of the problem (see https://github.com/rsyslog/rsyslog/issues/489 )
I think the proper way of addressing this is to process such lines without
termination, as many other tools do as well.
closes https://github.com/rsyslog/liblognorm/issues/135
----------------------------------------------------------------------
Version 1.1.2, 2015-07-20
- permit newline inside parser definition
- new parser "cisco-interface-spec"
- new parser "json" to process json parts of the message
- new parser "mac48" to process mac layer addresses
- new parser "name-value-list" (currently unofficial, experimental)
- some parsers did incorrectly report success when an error occurred
this was caused by inconsistencies between various macros. We have
changed the parser-generation macros to match the semantics of the
broader CHKN/CHKR macros and also restructured/simplified the
parser generation macros.
closes https://github.com/rsyslog/liblognorm/issues/41
- call "rest" parser only if nothing else matches.
Versions prior to 1.1.2 did execute "rest" during regular parser
processing, and thus parser matches were more or less random.
With 1.1.2 this is now always the last parser called. This may cause
problems with existing rulesets, HOWEVER, adding any other rule or
changing the load order would also have caused problems, so there
really is no compatibility to preserve.
see also:
http://blog.gerhards.net/2015/04/liblognorms-rest-parser-now-more-useful.html
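As a sketch of the new behavior (cf. the bundled parser_prios.sh test
for the v2 equivalent), given a rulebase like
rule=:%mac:mac48%
rule=:%line:rest%
the "rest" rule now only matches when the mac48 rule does not, whereas
prior versions made the outcome depend on rule load order.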
- new API to support error callbacks
This permits callers to forward messages in regard to e.g. wrong rule
bases to their users, which is very useful and actually missing in the
previous code base. So far, we only have few error messages.
However, we will review the code and add more. The important part is
that callers can begin to use the new API and thus will benefit when
we add more error messages.
- testbench is now enabled by default
- bugfix: misaddressing on some constant values
see also https://github.com/rsyslog/liblognorm/pull/67
Thanks to github user ontholerian for the patch
- bugfix: add missing function prototypes
This could potentially lead to problems on some platforms,
especially those with 64 bit pointers.
----------------------------------------------------------------------
Version 1.1.1, 2015-03-09
- fixed library version numbering
Thanks to Tomas Heinreich for reporting the problem.
- added new parser syntaxes
Thanks to Janmejay Singh for implementing most of them.
- bugfix: function ln_parseFieldDescr() returned an invalid state value
due to an uninitialized variable. This could also lead to no sample
node being returned where one would have to be created.
----------------------------------------------------------------------
Version 1.1.0, 2015-01-08
- added regular expression support
use this feature with great care, as it thrashes performance
Thanks to Janmejay Singh for implementing this feature.
- fix build problem when --enable-debug was set
closes: https://github.com/rsyslog/liblognorm/issues/5
----------------------------------------------------------------------
Version 1.0.1, 2014-04-11
- improved doc (via RST/Sphinx)
- bugfix: unparsed fields were copied incorrectly from non-terminated
string. Thanks to Josh Blum for the fix.
- bugfix: mandatory tag did not work in lognormalizer
----------------------------------------------------------------------
Version 1.0.0, 2013-11-28
- WARNING: this version has incompatible interface and older programs
will not compile with it.
For details see http://www.liblognorm.com/news/on-liblognorm-1-0-0/
- libestr is not used any more in interface functions. Traditional
C strings are used instead. Internally, libestr is still used, but
scheduled for removal.
- libee is not used any more. JSON-C is used for object handling
instead. Parsers and formatters are now part of liblognorm.
- added new field type "rest", which simply consumes everything up to
the end of the string.
- added support for gluing two fields together, without a literal
between them. It allows for constructs like:
%volume:number%%unit:word%
which matches string "1000Kbps"
- Fix incorrect merging of trees with empty literal at end
Thanks to Pavel Levshin for the patch
- this version has survived many bugfixes
----------------------------------------------------------------------
================================================================================
The versions below is liblognorm0, which has a different API
================================================================================
----------------------------------------------------------------------
Version 0.3.7, 2013-07-17
- added support to load single samples
Thanks to John Hopper for the patch
----------------------------------------------------------------------
Version 0.3.6, 2013-03-22
- bugfix: uninitialized variable could lead to rulebase load error
----------------------------------------------------------------------
Version 0.3.5 (rgerhards), 2012-09-18
- renamed "normalizer" tool to "lognormalizer" to solve name clashes
Thanks to the Fedora folks for pointing this out.
----------------------------------------------------------------------
Version 0.3.4 (rgerhards), 2012-04-16
- bugfix: normalizer tool had a memory leak
Thanks to Brian Know for alerting me and helping to debug
----------------------------------------------------------------------
Version 0.3.3 (rgerhards), 2012-02-06
- required header file was not installed, resulting in compile error
closes: http://bugzilla.adiscon.com/show_bug.cgi?id=307
Thanks to Andreis Vinogradovs for alerting us on this bug.
----------------------------------------------------------------------
Version 0.3.2 (rgerhards), 2011-11-21
- added rfc5424 parser (requires libee >= 0.3.2)
- added "-" to serve as name for filler fields. Value is extracted,
but no field is written
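For example (a sketch only), a rule like
rule=:session %-:number% closed
parses the number but does not add it to the output event.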
- special handling for iptables log via %iptables% parser added
(currently experimental pending practical verification)
- normalizer tool on its way to a full-blown stand-alone tool
- support for annotations added, for the time being see
http://blog.gerhards.net/2011/11/log-annotation-with-liblognorm.html
----------------------------------------------------------------------
Version 0.3.1 (rgerhards), 2011-04-18
- added -t option to normalizer so that only messages with a
specified tag will be output
- bugfix: abort if a tag was assigned to a message without any
fields parsed out (uncommon scenario)
- bugfix: mem leak on parse tree destruct -- associated tags were
not deleted
- bugfix: potential abort in normalizer due to misaddressing in debug
message generation
----------------------------------------------------------------------
Version 0.3.0 (rgerhards), 2011-04-06
- support for message classification via tags added
- bugfix: partial messages were invalidly matched
closes: http://bugzilla.adiscon.com/show_bug.cgi?id=247
----------------------------------------------------------------------
Version 0.2.0 (rgerhards), 2011-04-01
- added -E option to normalizer tool, permits to specify data for
encoders
- support for new libee parsers:
* Time12hr
* Time24hr
* ISODate
* QuotedString
- support for csv encoding added
- added -p option to normalizer tool (output only correctly parsed
entries)
- bugfix: segfault if a parse tree prefix had exactly buffer size,
in which case it was invalidly assumed that an external buffer had
been allocated
----------------------------------------------------------------------
Version 0.1.0 (rgerhards), 2010-12-09
Initial public release.
liblognorm-2.0.6/ltmain.sh 0000644 0001750 0001750 00001171474 13370251147 012446 0000000 0000000 #! /bin/sh
## DO NOT EDIT - This file generated from ./build-aux/ltmain.in
## by inline-source v2014-01-03.01
# libtool (GNU libtool) 2.4.6
# Provide generalized library-building support services.
# Written by Gordon Matzigkeit , 1996
# Copyright (C) 1996-2015 Free Software Foundation, Inc.
# This is free software; see the source for copying conditions. There is NO
# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# GNU Libtool is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# As a special exception to the GNU General Public License,
# if you distribute this file as part of a program or library that
# is built using GNU Libtool, you may include this file under the
# same distribution terms that you use for the rest of that program.
#
# GNU Libtool is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
PROGRAM=libtool
PACKAGE=libtool
VERSION="2.4.6 Debian-2.4.6-2"
package_revision=2.4.6
## ------ ##
## Usage. ##
## ------ ##
# Run './libtool --help' for help with using this script from the
# command line.
## ------------------------------- ##
## User overridable command paths. ##
## ------------------------------- ##
# After configure completes, it has a better idea of some of the
# shell tools we need than the defaults used by the functions shared
# with bootstrap, so set those here where they can still be over-
# ridden by the user, but otherwise take precedence.
: ${AUTOCONF="autoconf"}
: ${AUTOMAKE="automake"}
## -------------------------- ##
## Source external libraries. ##
## -------------------------- ##
# Much of our low-level functionality needs to be sourced from external
# libraries, which are installed to $pkgauxdir.
# Set a version string for this script.
scriptversion=2015-01-20.17; # UTC
# General shell script boiler plate, and helper functions.
# Written by Gary V. Vaughan, 2004
# Copyright (C) 2004-2015 Free Software Foundation, Inc.
# This is free software; see the source for copying conditions. There is NO
# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
# As a special exception to the GNU General Public License, if you distribute
# this file as part of a program or library that is built using GNU Libtool,
# you may include this file under the same distribution terms that you use
# for the rest of that program.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Please report bugs or propose patches to gary@gnu.org.
## ------ ##
## Usage. ##
## ------ ##
# Evaluate this file near the top of your script to gain access to
# the functions and variables defined here:
#
# . `echo "$0" | ${SED-sed} 's|[^/]*$||'`/build-aux/funclib.sh
#
# If you need to override any of the default environment variable
# settings, do that before evaluating this file.
## -------------------- ##
## Shell normalisation. ##
## -------------------- ##
# Some shells need a little help to be as Bourne compatible as possible.
# Before doing anything else, make sure all that help has been provided!
DUALCASE=1; export DUALCASE # for MKS sh
if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then :
emulate sh
NULLCMD=:
# Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which
# is contrary to our usage. Disable this feature.
alias -g '${1+"$@"}'='"$@"'
setopt NO_GLOB_SUBST
else
case `(set -o) 2>/dev/null` in *posix*) set -o posix ;; esac
fi
# NLS nuisances: We save the old values in case they are required later.
_G_user_locale=
_G_safe_locale=
for _G_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES
do
eval "if test set = \"\${$_G_var+set}\"; then
save_$_G_var=\$$_G_var
$_G_var=C
export $_G_var
_G_user_locale=\"$_G_var=\\\$save_\$_G_var; \$_G_user_locale\"
_G_safe_locale=\"$_G_var=C; \$_G_safe_locale\"
fi"
done
# CDPATH.
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH
# Make sure IFS has a sensible default
sp=' '
nl='
'
IFS="$sp $nl"
# There are apparently some retarded systems that use ';' as a PATH separator!
if test "${PATH_SEPARATOR+set}" != set; then
PATH_SEPARATOR=:
(PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && {
(PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 ||
PATH_SEPARATOR=';'
}
fi
## ------------------------- ##
## Locate command utilities. ##
## ------------------------- ##
# func_executable_p FILE
# ----------------------
# Check that FILE is an executable regular file.
func_executable_p ()
{
test -f "$1" && test -x "$1"
}
# func_path_progs PROGS_LIST CHECK_FUNC [PATH]
# --------------------------------------------
# Search for a program that responds to --version with output containing
# "GNU", or failing that, the program selected by CHECK_FUNC, trying each
# element of PROGS_LIST in every directory of PATH.
#
# CHECK_FUNC should accept the path to a candidate program, and
# set $func_check_prog_result if it truncates its output less than
# $_G_path_prog_max characters.
func_path_progs ()
{
_G_progs_list=$1
_G_check_func=$2
_G_PATH=${3-"$PATH"}
_G_path_prog_max=0
_G_path_prog_found=false
_G_save_IFS=$IFS; IFS=${PATH_SEPARATOR-:}
for _G_dir in $_G_PATH; do
IFS=$_G_save_IFS
test -z "$_G_dir" && _G_dir=.
for _G_prog_name in $_G_progs_list; do
for _exeext in '' .EXE; do
_G_path_prog=$_G_dir/$_G_prog_name$_exeext
func_executable_p "$_G_path_prog" || continue
case `"$_G_path_prog" --version 2>&1` in
*GNU*) func_path_progs_result=$_G_path_prog _G_path_prog_found=: ;;
*) $_G_check_func $_G_path_prog
func_path_progs_result=$func_check_prog_result
;;
esac
$_G_path_prog_found && break 3
done
done
done
IFS=$_G_save_IFS
test -z "$func_path_progs_result" && {
echo "no acceptable sed could be found in \$PATH" >&2
exit 1
}
}
# We want to be able to use the functions in this file before configure
# has figured out where the best binaries are kept, which means we have
# to search for them ourselves - except when the results are already set
# where we skip the searches.
# Unless the user overrides by setting SED, search the path for either GNU
# sed, or the sed that truncates its output the least.
test -z "$SED" && {
_G_sed_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/
for _G_i in 1 2 3 4 5 6 7; do
_G_sed_script=$_G_sed_script$nl$_G_sed_script
done
echo "$_G_sed_script" 2>/dev/null | sed 99q >conftest.sed
_G_sed_script=
func_check_prog_sed ()
{
_G_path_prog=$1
_G_count=0
printf 0123456789 >conftest.in
while :
do
cat conftest.in conftest.in >conftest.tmp
mv conftest.tmp conftest.in
cp conftest.in conftest.nl
echo '' >> conftest.nl
"$_G_path_prog" -f conftest.sed conftest.out 2>/dev/null || break
diff conftest.out conftest.nl >/dev/null 2>&1 || break
_G_count=`expr $_G_count + 1`
if test "$_G_count" -gt "$_G_path_prog_max"; then
# Best one so far, save it but keep looking for a better one
func_check_prog_result=$_G_path_prog
_G_path_prog_max=$_G_count
fi
# 10*(2^10) chars as input seems more than enough
test 10 -lt "$_G_count" && break
done
rm -f conftest.in conftest.tmp conftest.nl conftest.out
}
func_path_progs "sed gsed" func_check_prog_sed $PATH:/usr/xpg4/bin
rm -f conftest.sed
SED=$func_path_progs_result
}
# Unless the user overrides by setting GREP, search the path for either GNU
# grep, or the grep that truncates its output the least.
test -z "$GREP" && {
func_check_prog_grep ()
{
_G_path_prog=$1
_G_count=0
_G_path_prog_max=0
printf 0123456789 >conftest.in
while :
do
cat conftest.in conftest.in >conftest.tmp
mv conftest.tmp conftest.in
cp conftest.in conftest.nl
echo 'GREP' >> conftest.nl
"$_G_path_prog" -e 'GREP$' -e '-(cannot match)-' conftest.out 2>/dev/null || break
diff conftest.out conftest.nl >/dev/null 2>&1 || break
_G_count=`expr $_G_count + 1`
if test "$_G_count" -gt "$_G_path_prog_max"; then
# Best one so far, save it but keep looking for a better one
func_check_prog_result=$_G_path_prog
_G_path_prog_max=$_G_count
fi
# 10*(2^10) chars as input seems more than enough
test 10 -lt "$_G_count" && break
done
rm -f conftest.in conftest.tmp conftest.nl conftest.out
}
func_path_progs "grep ggrep" func_check_prog_grep $PATH:/usr/xpg4/bin
GREP=$func_path_progs_result
}
## ------------------------------- ##
## User overridable command paths. ##
## ------------------------------- ##
# All uppercase variable names are used for environment variables. These
# variables can be overridden by the user before calling a script that
# uses them if a suitable command of that name is not already available
# in the command search PATH.
: ${CP="cp -f"}
: ${ECHO="printf %s\n"}
: ${EGREP="$GREP -E"}
: ${FGREP="$GREP -F"}
: ${LN_S="ln -s"}
: ${MAKE="make"}
: ${MKDIR="mkdir"}
: ${MV="mv -f"}
: ${RM="rm -f"}
: ${SHELL="${CONFIG_SHELL-/bin/sh}"}
## -------------------- ##
## Useful sed snippets. ##
## -------------------- ##
sed_dirname='s|/[^/]*$||'
sed_basename='s|^.*/||'
# Sed substitution that helps us do robust quoting. It backslashifies
# metacharacters that are still active within double-quoted strings.
sed_quote_subst='s|\([`"$\\]\)|\\\1|g'
# Same as above, but do not quote variable references.
sed_double_quote_subst='s/\(["`\\]\)/\\\1/g'
# Sed substitution that turns a string into a regex matching for the
# string literally.
sed_make_literal_regex='s|[].[^$\\*\/]|\\&|g'
# Sed substitution that converts a w32 file name or path
# that contains forward slashes, into one that contains
# (escaped) backslashes. A very naive implementation.
sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
# Re-'\' parameter expansions in output of sed_double_quote_subst that
# were '\'-ed in input to the same. If an odd number of '\' preceded a
# '$' in input to sed_double_quote_subst, that '$' was protected from
# expansion. Since each input '\' is now two '\'s, look for any number
# of runs of four '\'s followed by two '\'s and then a '$'. '\' that '$'.
_G_bs='\\'
_G_bs2='\\\\'
_G_bs4='\\\\\\\\'
_G_dollar='\$'
sed_double_backslash="\
s/$_G_bs4/&\\
/g
s/^$_G_bs2$_G_dollar/$_G_bs&/
s/\\([^$_G_bs]\\)$_G_bs2$_G_dollar/\\1$_G_bs2$_G_bs$_G_dollar/g
s/\n//g"
## ----------------- ##
## Global variables. ##
## ----------------- ##
# Except for the global variables explicitly listed below, the following
# functions in the '^func_' namespace, and the '^require_' namespace
# variables initialised in the 'Resource management' section, sourcing
# this file will not pollute your global namespace with anything
# else. There's no portable way to scope variables in Bourne shell
# though, so actually running these functions will sometimes place
# results into a variable named after the function, and often use
# temporary variables in the '^_G_' namespace. If you are careful to
# avoid using those namespaces casually in your sourcing script, things
# should continue to work as you expect. And, of course, you can freely
# overwrite any of the functions or variables defined here before
# calling anything to customize them.
EXIT_SUCCESS=0
EXIT_FAILURE=1
EXIT_MISMATCH=63 # $? = 63 is used to indicate version mismatch to missing.
EXIT_SKIP=77 # $? = 77 is used to indicate a skipped test to automake.
# Allow overriding, eg assuming that you follow the convention of
# putting '$debug_cmd' at the start of all your functions, you can get
# bash to show function call trace with:
#
# debug_cmd='eval echo "${FUNCNAME[0]} $*" >&2' bash your-script-name
debug_cmd=${debug_cmd-":"}
exit_cmd=:
# By convention, finish your script with:
#
# exit $exit_status
#
# so that you can set exit_status to non-zero if you want to indicate
# something went wrong during execution without actually bailing out at
# the point of failure.
exit_status=$EXIT_SUCCESS
# Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh
# is ksh but when the shell is invoked as "sh" and the current value of
# the _XPG environment variable is not equal to 1 (one), the special
# positional parameter $0, within a function call, is the name of the
# function.
progpath=$0
# The name of this program.
progname=`$ECHO "$progpath" |$SED "$sed_basename"`
# Make sure we have an absolute progpath for reexecution:
case $progpath in
[\\/]*|[A-Za-z]:\\*) ;;
*[\\/]*)
progdir=`$ECHO "$progpath" |$SED "$sed_dirname"`
progdir=`cd "$progdir" && pwd`
progpath=$progdir/$progname
;;
*)
_G_IFS=$IFS
IFS=${PATH_SEPARATOR-:}
for progdir in $PATH; do
IFS=$_G_IFS
test -x "$progdir/$progname" && break
done
IFS=$_G_IFS
test -n "$progdir" || progdir=`pwd`
progpath=$progdir/$progname
;;
esac
## ----------------- ##
## Standard options. ##
## ----------------- ##
# The following options affect the operation of the functions defined
# below, and should be set appropriately depending on run-time para-
# meters passed on the command line.
opt_dry_run=false
opt_quiet=false
opt_verbose=false
# Categories 'all' and 'none' are always available. Append any others
# you will pass as the first argument to func_warning from your own
# code.
warning_categories=
# By default, display warnings according to 'opt_warning_types'. Set
# 'warning_func' to ':' to elide all warnings, or func_fatal_error to
# treat the next displayed warning as a fatal error.
warning_func=func_warn_and_continue
# Set to 'all' to display all warnings, 'none' to suppress all
# warnings, or a space delimited list of some subset of
# 'warning_categories' to display only the listed warnings.
opt_warning_types=all
## -------------------- ##
## Resource management. ##
## -------------------- ##
# This section contains definitions for functions that each ensure a
# particular resource (a file, or a non-empty configuration variable for
# example) is available, and if appropriate to extract default values
# from pertinent package files. Call them using their associated
# 'require_*' variable to ensure that they are executed, at most, once.
#
# It's entirely deliberate that calling these functions can set
# variables that don't obey the namespace limitations obeyed by the rest
# of this file, in order that they be as useful as possible to
# callers.
# require_term_colors
# -------------------
# Allow display of bold text on terminals that support it.
require_term_colors=func_require_term_colors
func_require_term_colors ()
{
$debug_cmd
test -t 1 && {
# COLORTERM and USE_ANSI_COLORS environment variables take
# precedence, because most terminfo databases neglect to describe
# whether color sequences are supported.
test -n "${COLORTERM+set}" && : ${USE_ANSI_COLORS="1"}
if test 1 = "$USE_ANSI_COLORS"; then
# Standard ANSI escape sequences
tc_reset='[0m'
tc_bold='[1m'; tc_standout='[7m'
tc_red='[31m'; tc_green='[32m'
tc_blue='[34m'; tc_cyan='[36m'
else
# Otherwise trust the terminfo database after all.
test -n "`tput sgr0 2>/dev/null`" && {
tc_reset=`tput sgr0`
test -n "`tput bold 2>/dev/null`" && tc_bold=`tput bold`
tc_standout=$tc_bold
test -n "`tput smso 2>/dev/null`" && tc_standout=`tput smso`
test -n "`tput setaf 1 2>/dev/null`" && tc_red=`tput setaf 1`
test -n "`tput setaf 2 2>/dev/null`" && tc_green=`tput setaf 2`
test -n "`tput setaf 4 2>/dev/null`" && tc_blue=`tput setaf 4`
test -n "`tput setaf 5 2>/dev/null`" && tc_cyan=`tput setaf 5`
}
fi
}
require_term_colors=:
}
## ----------------- ##
## Function library. ##
## ----------------- ##
# This section contains a variety of useful functions to call in your
# scripts. Take note of the portable wrappers for features provided by
# some modern shells, which will fall back to slower equivalents on
# less featureful shells.
# func_append VAR VALUE
# ---------------------
# Append VALUE onto the existing contents of VAR.
# We should try to minimise forks, especially on Windows where they are
# unreasonably slow, so skip the feature probes when bash or zsh are
# being used:
if test set = "${BASH_VERSION+set}${ZSH_VERSION+set}"; then
: ${_G_HAVE_ARITH_OP="yes"}
: ${_G_HAVE_XSI_OPS="yes"}
# The += operator was introduced in bash 3.1
case $BASH_VERSION in
[12].* | 3.0 | 3.0*) ;;
*)
: ${_G_HAVE_PLUSEQ_OP="yes"}
;;
esac
fi
# _G_HAVE_PLUSEQ_OP
# Can be empty, in which case the shell is probed, "yes" if += is
# useable or anything else if it does not work.
test -z "$_G_HAVE_PLUSEQ_OP" \
&& (eval 'x=a; x+=" b"; test "a b" = "$x"') 2>/dev/null \
&& _G_HAVE_PLUSEQ_OP=yes
if test yes = "$_G_HAVE_PLUSEQ_OP"
then
# This is an XSI compatible shell, allowing a faster implementation...
eval 'func_append ()
{
$debug_cmd
eval "$1+=\$2"
}'
else
# ...otherwise fall back to using expr, which is often a shell builtin.
func_append ()
{
$debug_cmd
eval "$1=\$$1\$2"
}
fi
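# Minimal usage sketch (editor's illustration, not part of the generated
# libtool script); the variable name below is hypothetical.  Whichever
# implementation was selected above, func_append concatenates in place:
#
#    my_flags=-g
#    func_append my_flags " -O2"
#    # $my_flags is now "-g -O2"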
# func_append_quoted VAR VALUE
# ----------------------------
# Quote VALUE and append to the end of shell variable VAR, separated
# by a space.
if test yes = "$_G_HAVE_PLUSEQ_OP"; then
eval 'func_append_quoted ()
{
$debug_cmd
func_quote_for_eval "$2"
eval "$1+=\\ \$func_quote_for_eval_result"
}'
else
func_append_quoted ()
{
$debug_cmd
func_quote_for_eval "$2"
eval "$1=\$$1\\ \$func_quote_for_eval_result"
}
fi
# func_append_uniq VAR VALUE
# --------------------------
# Append unique VALUE onto the existing contents of VAR, assuming
# entries are delimited by the first character of VALUE. For example:
#
# func_append_uniq options " --another-option option-argument"
#
# will only append to $options if " --another-option option-argument "
# is not already present somewhere in $options already (note spaces at
# each end implied by leading space in second argument).
func_append_uniq ()
{
$debug_cmd
eval _G_current_value='`$ECHO $'$1'`'
_G_delim=`expr "$2" : '\(.\)'`
case $_G_delim$_G_current_value$_G_delim in
*"$2$_G_delim"*) ;;
*) func_append "$@" ;;
esac
}
# func_arith TERM...
# ------------------
# Set func_arith_result to the result of evaluating TERMs.
test -z "$_G_HAVE_ARITH_OP" \
&& (eval 'test 2 = $(( 1 + 1 ))') 2>/dev/null \
&& _G_HAVE_ARITH_OP=yes
if test yes = "$_G_HAVE_ARITH_OP"; then
eval 'func_arith ()
{
$debug_cmd
func_arith_result=$(( $* ))
}'
else
func_arith ()
{
$debug_cmd
func_arith_result=`expr "$@"`
}
fi
# func_basename FILE
# ------------------
# Set func_basename_result to FILE with everything up to and including
# the last / stripped.
if test yes = "$_G_HAVE_XSI_OPS"; then
# If this shell supports suffix pattern removal, then use it to avoid
# forking. Hide the definitions single quotes in case the shell chokes
# on unsupported syntax...
_b='func_basename_result=${1##*/}'
_d='case $1 in
*/*) func_dirname_result=${1%/*}$2 ;;
* ) func_dirname_result=$3 ;;
esac'
else
# ...otherwise fall back to using sed.
_b='func_basename_result=`$ECHO "$1" |$SED "$sed_basename"`'
_d='func_dirname_result=`$ECHO "$1" |$SED "$sed_dirname"`
if test "X$func_dirname_result" = "X$1"; then
func_dirname_result=$3
else
func_append func_dirname_result "$2"
fi'
fi
eval 'func_basename ()
{
$debug_cmd
'"$_b"'
}'
# func_dirname FILE APPEND NONDIR_REPLACEMENT
# -------------------------------------------
# Compute the dirname of FILE. If nonempty, add APPEND to the result,
# otherwise set result to NONDIR_REPLACEMENT.
eval 'func_dirname ()
{
$debug_cmd
'"$_d"'
}'
# func_dirname_and_basename FILE APPEND NONDIR_REPLACEMENT
# --------------------------------------------------------
# Perform func_basename and func_dirname in a single function
# call:
# dirname: Compute the dirname of FILE. If nonempty,
# add APPEND to the result, otherwise set result
# to NONDIR_REPLACEMENT.
# value returned in "$func_dirname_result"
# basename: Compute filename of FILE.
# value returned in "$func_basename_result"
# For efficiency, we do not delegate to the functions above but instead
# duplicate the functionality here.
eval 'func_dirname_and_basename ()
{
$debug_cmd
'"$_b"'
'"$_d"'
}'
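# Illustrative sketch (hypothetical path, not part of the generated
# script): splitting a path with the combined helper above:
#
#    func_dirname_and_basename "/usr/lib/libfoo.la" "" "."
#    # func_dirname_result  -> "/usr/lib"
#    # func_basename_result -> "libfoo.la"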
# func_echo ARG...
# ----------------
# Echo program name prefixed message.
func_echo ()
{
$debug_cmd
_G_message=$*
func_echo_IFS=$IFS
IFS=$nl
for _G_line in $_G_message; do
IFS=$func_echo_IFS
$ECHO "$progname: $_G_line"
done
IFS=$func_echo_IFS
}
# func_echo_all ARG...
# --------------------
# Invoke $ECHO with all args, space-separated.
func_echo_all ()
{
$ECHO "$*"
}
# func_echo_infix_1 INFIX ARG...
# ------------------------------
# Echo program name, followed by INFIX on the first line, with any
# additional lines not showing INFIX.
func_echo_infix_1 ()
{
$debug_cmd
$require_term_colors
_G_infix=$1; shift
_G_indent=$_G_infix
_G_prefix="$progname: $_G_infix: "
_G_message=$*
# Strip color escape sequences before counting printable length
for _G_tc in "$tc_reset" "$tc_bold" "$tc_standout" "$tc_red" "$tc_green" "$tc_blue" "$tc_cyan"
do
test -n "$_G_tc" && {
_G_esc_tc=`$ECHO "$_G_tc" | $SED "$sed_make_literal_regex"`
_G_indent=`$ECHO "$_G_indent" | $SED "s|$_G_esc_tc||g"`
}
done
_G_indent="$progname: "`echo "$_G_indent" | $SED 's|.| |g'`" " ## exclude from sc_prohibit_nested_quotes
func_echo_infix_1_IFS=$IFS
IFS=$nl
for _G_line in $_G_message; do
IFS=$func_echo_infix_1_IFS
$ECHO "$_G_prefix$tc_bold$_G_line$tc_reset" >&2
_G_prefix=$_G_indent
done
IFS=$func_echo_infix_1_IFS
}
# func_error ARG...
# -----------------
# Echo program name prefixed message to standard error.
func_error ()
{
$debug_cmd
$require_term_colors
func_echo_infix_1 " $tc_standout${tc_red}error$tc_reset" "$*" >&2
}
# func_fatal_error ARG...
# -----------------------
# Echo program name prefixed message to standard error, and exit.
func_fatal_error ()
{
$debug_cmd
func_error "$*"
exit $EXIT_FAILURE
}
# func_grep EXPRESSION FILENAME
# -----------------------------
# Check whether EXPRESSION matches any line of FILENAME, without output.
func_grep ()
{
$debug_cmd
$GREP "$1" "$2" >/dev/null 2>&1
}
# func_len STRING
# ---------------
# Set func_len_result to the length of STRING. STRING may not
# start with a hyphen.
test -z "$_G_HAVE_XSI_OPS" \
&& (eval 'x=a/b/c;
test 5aa/bb/cc = "${#x}${x%%/*}${x%/*}${x#*/}${x##*/}"') 2>/dev/null \
&& _G_HAVE_XSI_OPS=yes
if test yes = "$_G_HAVE_XSI_OPS"; then
eval 'func_len ()
{
$debug_cmd
func_len_result=${#1}
}'
else
func_len ()
{
$debug_cmd
func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
}
fi
# func_mkdir_p DIRECTORY-PATH
# ---------------------------
# Make sure the entire path to DIRECTORY-PATH is available.
func_mkdir_p ()
{
$debug_cmd
_G_directory_path=$1
_G_dir_list=
if test -n "$_G_directory_path" && test : != "$opt_dry_run"; then
# Protect directory names starting with '-'
case $_G_directory_path in
-*) _G_directory_path=./$_G_directory_path ;;
esac
# While some portion of DIR does not yet exist...
while test ! -d "$_G_directory_path"; do
# ...make a list in topmost first order. Use a colon delimited
# list in case some portion of path contains whitespace.
_G_dir_list=$_G_directory_path:$_G_dir_list
# If the last portion added has no slash in it, the list is done
case $_G_directory_path in */*) ;; *) break ;; esac
# ...otherwise throw away the child directory and loop
_G_directory_path=`$ECHO "$_G_directory_path" | $SED -e "$sed_dirname"`
done
_G_dir_list=`$ECHO "$_G_dir_list" | $SED 's|:*$||'`
func_mkdir_p_IFS=$IFS; IFS=:
for _G_dir in $_G_dir_list; do
IFS=$func_mkdir_p_IFS
# mkdir can fail with a 'File exists' error if two processes
# try to create one of the directories concurrently. Don't
# stop in that case!
$MKDIR "$_G_dir" 2>/dev/null || :
done
IFS=$func_mkdir_p_IFS
# Bail out if we (or some other process) failed to create a directory.
test -d "$_G_directory_path" || \
func_fatal_error "Failed to create '$1'"
fi
}
# func_mktempdir [BASENAME]
# -------------------------
# Make a temporary directory that won't clash with other running
# libtool processes, and avoids race conditions if possible. If
# given, BASENAME is the basename for that directory.
func_mktempdir ()
{
$debug_cmd
_G_template=${TMPDIR-/tmp}/${1-$progname}
if test : = "$opt_dry_run"; then
# Return a directory name, but don't create it in dry-run mode
_G_tmpdir=$_G_template-$$
else
# If mktemp works, use that first and foremost
_G_tmpdir=`mktemp -d "$_G_template-XXXXXXXX" 2>/dev/null`
if test ! -d "$_G_tmpdir"; then
# Failing that, at least try and use $RANDOM to avoid a race
_G_tmpdir=$_G_template-${RANDOM-0}$$
func_mktempdir_umask=`umask`
umask 0077
$MKDIR "$_G_tmpdir"
umask $func_mktempdir_umask
fi
# If we're not in dry-run mode, bomb out on failure
test -d "$_G_tmpdir" || \
func_fatal_error "cannot create temporary directory '$_G_tmpdir'"
fi
$ECHO "$_G_tmpdir"
}
# func_normal_abspath PATH
# ------------------------
# Remove doubled-up and trailing slashes, "." path components,
# and cancel out any ".." path components in PATH after making
# it an absolute path.
func_normal_abspath ()
{
$debug_cmd
# These SED scripts presuppose an absolute path with a trailing slash.
_G_pathcar='s|^/\([^/]*\).*$|\1|'
_G_pathcdr='s|^/[^/]*||'
_G_removedotparts=':dotsl
s|/\./|/|g
t dotsl
s|/\.$|/|'
_G_collapseslashes='s|/\{1,\}|/|g'
_G_finalslash='s|/*$|/|'
# Start from root dir and reassemble the path.
func_normal_abspath_result=
func_normal_abspath_tpath=$1
func_normal_abspath_altnamespace=
case $func_normal_abspath_tpath in
"")
# Empty path, that just means $cwd.
func_stripname '' '/' "`pwd`"
func_normal_abspath_result=$func_stripname_result
return
;;
# The next three entries are used to spot a run of precisely
# two leading slashes without using negated character classes;
# we take advantage of case's first-match behaviour.
///*)
# Unusual form of absolute path, do nothing.
;;
//*)
# Not necessarily an ordinary path; POSIX reserves leading '//'
# and for example Cygwin uses it to access remote file shares
# over CIFS/SMB, so we conserve a leading double slash if found.
func_normal_abspath_altnamespace=/
;;
/*)
# Absolute path, do nothing.
;;
*)
# Relative path, prepend $cwd.
func_normal_abspath_tpath=`pwd`/$func_normal_abspath_tpath
;;
esac
# Cancel out all the simple stuff to save iterations. We also want
# the path to end with a slash for ease of parsing, so make sure
# there is one (and only one) here.
func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \
-e "$_G_removedotparts" -e "$_G_collapseslashes" -e "$_G_finalslash"`
while :; do
# Processed it all yet?
if test / = "$func_normal_abspath_tpath"; then
# If we ascended to the root using ".." the result may be empty now.
if test -z "$func_normal_abspath_result"; then
func_normal_abspath_result=/
fi
break
fi
func_normal_abspath_tcomponent=`$ECHO "$func_normal_abspath_tpath" | $SED \
-e "$_G_pathcar"`
func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \
-e "$_G_pathcdr"`
# Figure out what to do with it
case $func_normal_abspath_tcomponent in
"")
# Trailing empty path component, ignore it.
;;
..)
# Parent dir; strip last assembled component from result.
func_dirname "$func_normal_abspath_result"
func_normal_abspath_result=$func_dirname_result
;;
*)
# Actual path component, append it.
func_append func_normal_abspath_result "/$func_normal_abspath_tcomponent"
;;
esac
done
# Restore leading double-slash if one was found on entry.
func_normal_abspath_result=$func_normal_abspath_altnamespace$func_normal_abspath_result
}
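# Illustrative sketch (hypothetical path, editor's illustration rather
# than part of the generated script): "." and ".." components and
# doubled slashes are cancelled out:
#
#    func_normal_abspath "/usr//local/../bin/./ls"
#    # func_normal_abspath_result -> "/usr/bin/ls"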
# func_notquiet ARG...
# --------------------
# Echo program name prefixed message only when not in quiet mode.
func_notquiet ()
{
$debug_cmd
$opt_quiet || func_echo ${1+"$@"}
# A bug in bash halts the script if the last line of a function
# fails when set -e is in force, so we need another command to
# work around that:
:
}
# func_relative_path SRCDIR DSTDIR
# --------------------------------
# Set func_relative_path_result to the relative path from SRCDIR to DSTDIR.
func_relative_path ()
{
$debug_cmd
func_relative_path_result=
func_normal_abspath "$1"
func_relative_path_tlibdir=$func_normal_abspath_result
func_normal_abspath "$2"
func_relative_path_tbindir=$func_normal_abspath_result
# Ascend the tree starting from libdir
while :; do
# check if we have found a prefix of bindir
case $func_relative_path_tbindir in
$func_relative_path_tlibdir)
# found an exact match
func_relative_path_tcancelled=
break
;;
$func_relative_path_tlibdir*)
# found a matching prefix
func_stripname "$func_relative_path_tlibdir" '' "$func_relative_path_tbindir"
func_relative_path_tcancelled=$func_stripname_result
if test -z "$func_relative_path_result"; then
func_relative_path_result=.
fi
break
;;
*)
func_dirname $func_relative_path_tlibdir
func_relative_path_tlibdir=$func_dirname_result
if test -z "$func_relative_path_tlibdir"; then
# Have to descend all the way to the root!
func_relative_path_result=../$func_relative_path_result
func_relative_path_tcancelled=$func_relative_path_tbindir
break
fi
func_relative_path_result=../$func_relative_path_result
;;
esac
done
# Now calculate path; take care to avoid doubling-up slashes.
func_stripname '' '/' "$func_relative_path_result"
func_relative_path_result=$func_stripname_result
func_stripname '/' '/' "$func_relative_path_tcancelled"
if test -n "$func_stripname_result"; then
func_append func_relative_path_result "/$func_stripname_result"
fi
# Normalisation. If bindir is libdir, return '.' else relative path.
if test -n "$func_relative_path_result"; then
func_stripname './' '' "$func_relative_path_result"
func_relative_path_result=$func_stripname_result
fi
test -n "$func_relative_path_result" || func_relative_path_result=.
:
}
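# Illustrative sketch (hypothetical directories, not part of the
# generated script):
#
#    func_relative_path "/usr/local/lib" "/usr/local/bin"
#    # func_relative_path_result -> "../bin"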
# func_quote_for_eval ARG...
# --------------------------
# Aesthetically quote ARGs to be evaled later.
# This function returns two values:
# i) func_quote_for_eval_result
# double-quoted, suitable for a subsequent eval
# ii) func_quote_for_eval_unquoted_result
# has all characters that are still active within double
# quotes backslashified.
func_quote_for_eval ()
{
$debug_cmd
func_quote_for_eval_unquoted_result=
func_quote_for_eval_result=
while test 0 -lt $#; do
case $1 in
*[\\\`\"\$]*)
_G_unquoted_arg=`printf '%s\n' "$1" |$SED "$sed_quote_subst"` ;;
*)
_G_unquoted_arg=$1 ;;
esac
if test -n "$func_quote_for_eval_unquoted_result"; then
func_append func_quote_for_eval_unquoted_result " $_G_unquoted_arg"
else
func_append func_quote_for_eval_unquoted_result "$_G_unquoted_arg"
fi
case $_G_unquoted_arg in
# Double-quote args containing shell metacharacters to delay
# word splitting, command substitution and variable expansion
# for a subsequent eval.
# Many Bourne shells cannot handle close brackets correctly
# in scan sets, so we specify it separately.
*[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
_G_quoted_arg=\"$_G_unquoted_arg\"
;;
*)
_G_quoted_arg=$_G_unquoted_arg
;;
esac
if test -n "$func_quote_for_eval_result"; then
func_append func_quote_for_eval_result " $_G_quoted_arg"
else
func_append func_quote_for_eval_result "$_G_quoted_arg"
fi
shift
done
}
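# Illustrative sketch (hypothetical arguments, not part of the generated
# script): arguments containing whitespace are double-quoted and active
# metacharacters are backslashified, so a later eval sees them unchanged:
#
#    func_quote_for_eval "two words" 'a$b'
#    # func_quote_for_eval_result -> "two words" a\$b
#    # i.e. 'eval set dummy $func_quote_for_eval_result; shift' yields
#    # exactly two arguments again: "two words" and "a$b".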
# func_quote_for_expand ARG
# -------------------------
# Aesthetically quote ARG to be evaled later; same as above,
# but do not quote variable references.
func_quote_for_expand ()
{
$debug_cmd
case $1 in
*[\\\`\"]*)
_G_arg=`$ECHO "$1" | $SED \
-e "$sed_double_quote_subst" -e "$sed_double_backslash"` ;;
*)
_G_arg=$1 ;;
esac
case $_G_arg in
# Double-quote args containing shell metacharacters to delay
# word splitting and command substitution for a subsequent eval.
# Many Bourne shells cannot handle close brackets correctly
# in scan sets, so we specify it separately.
*[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"")
_G_arg=\"$_G_arg\"
;;
esac
func_quote_for_expand_result=$_G_arg
}
# func_stripname PREFIX SUFFIX NAME
# ---------------------------------
# strip PREFIX and SUFFIX from NAME, and store in func_stripname_result.
# PREFIX and SUFFIX must not contain globbing or regex special
# characters, hashes, percent signs, but SUFFIX may contain a leading
# dot (in which case that matches only a dot).
if test yes = "$_G_HAVE_XSI_OPS"; then
eval 'func_stripname ()
{
$debug_cmd
# pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
# positional parameters, so assign one to ordinary variable first.
func_stripname_result=$3
func_stripname_result=${func_stripname_result#"$1"}
func_stripname_result=${func_stripname_result%"$2"}
}'
else
func_stripname ()
{
$debug_cmd
case $2 in
.*) func_stripname_result=`$ECHO "$3" | $SED -e "s%^$1%%" -e "s%\\\\$2\$%%"`;;
*) func_stripname_result=`$ECHO "$3" | $SED -e "s%^$1%%" -e "s%$2\$%%"`;;
esac
}
fi
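# Illustrative sketch (hypothetical names, not part of the generated
# script):
#
#    func_stripname 'lib' '.la' 'libfoo.la'
#    # func_stripname_result -> "foo"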
# func_show_eval CMD [FAIL_EXP]
# -----------------------------
# Unless opt_quiet is true, then output CMD. Then, if opt_dryrun is
# not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP
# is given, then evaluate it.
func_show_eval ()
{
$debug_cmd
_G_cmd=$1
_G_fail_exp=${2-':'}
func_quote_for_expand "$_G_cmd"
eval "func_notquiet $func_quote_for_expand_result"
$opt_dry_run || {
eval "$_G_cmd"
_G_status=$?
if test 0 -ne "$_G_status"; then
eval "(exit $_G_status); $_G_fail_exp"
fi
}
}
# func_show_eval_locale CMD [FAIL_EXP]
# ------------------------------------
# Unless opt_quiet is true, then output CMD. Then, if opt_dryrun is
# not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP
# is given, then evaluate it. Use the saved locale for evaluation.
func_show_eval_locale ()
{
$debug_cmd
_G_cmd=$1
_G_fail_exp=${2-':'}
$opt_quiet || {
func_quote_for_expand "$_G_cmd"
eval "func_echo $func_quote_for_expand_result"
}
$opt_dry_run || {
eval "$_G_user_locale
$_G_cmd"
_G_status=$?
eval "$_G_safe_locale"
if test 0 -ne "$_G_status"; then
eval "(exit $_G_status); $_G_fail_exp"
fi
}
}
# func_tr_sh
# ----------
# Turn $1 into a string suitable for a shell variable name.
# Result is stored in $func_tr_sh_result. All characters
# not in the set a-zA-Z0-9_ are replaced with '_'. Further,
# if $1 begins with a digit, a '_' is prepended as well.
func_tr_sh ()
{
$debug_cmd
case $1 in
[0-9]* | *[!a-zA-Z0-9_]*)
func_tr_sh_result=`$ECHO "$1" | $SED -e 's/^\([0-9]\)/_\1/' -e 's/[^a-zA-Z0-9_]/_/g'`
;;
* )
func_tr_sh_result=$1
;;
esac
}
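# Illustrative sketch (hypothetical input, not part of the generated
# script):
#
#    func_tr_sh "4-lib.so"
#    # func_tr_sh_result -> "_4_lib_so"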
# func_verbose ARG...
# -------------------
# Echo program name prefixed message in verbose mode only.
func_verbose ()
{
$debug_cmd
$opt_verbose && func_echo "$*"
:
}
# func_warn_and_continue ARG...
# -----------------------------
# Echo program name prefixed warning message to standard error.
func_warn_and_continue ()
{
$debug_cmd
$require_term_colors
func_echo_infix_1 "${tc_red}warning$tc_reset" "$*" >&2
}
# func_warning CATEGORY ARG...
# ----------------------------
# Echo program name prefixed warning message to standard error. Warning
# messages can be filtered according to CATEGORY, where this function
# elides messages where CATEGORY is not listed in the global variable
# 'opt_warning_types'.
func_warning ()
{
$debug_cmd
# CATEGORY must be in the warning_categories list!
case " $warning_categories " in
*" $1 "*) ;;
*) func_internal_error "invalid warning category '$1'" ;;
esac
_G_category=$1
shift
case " $opt_warning_types " in
*" $_G_category "*) $warning_func ${1+"$@"} ;;
esac
}
# func_sort_ver VER1 VER2
# -----------------------
# 'sort -V' is not generally available.
# Note this deviates from the version comparison in automake
# in that it treats 1.5 < 1.5.0, and treats 1.4.4a < 1.4-p3a
# but this should suffice as we won't be specifying old
# version formats or redundant trailing .0 in bootstrap.conf.
# If we did want full compatibility then we should probably
# use m4_version_compare from autoconf.
func_sort_ver ()
{
$debug_cmd
printf '%s\n%s\n' "$1" "$2" \
| sort -t. -k 1,1n -k 2,2n -k 3,3n -k 4,4n -k 5,5n -k 6,6n -k 7,7n -k 8,8n -k 9,9n
}
# func_lt_ver PREV CURR
# ---------------------
# Return true if PREV and CURR are in the correct order according to
# func_sort_ver, otherwise false. Use it like this:
#
# func_lt_ver "$prev_ver" "$proposed_ver" || func_fatal_error "..."
func_lt_ver ()
{
$debug_cmd
test "x$1" = x`func_sort_ver "$1" "$2" | $SED 1q`
}
# Local variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'before-save-hook 'time-stamp)
# time-stamp-pattern: "10/scriptversion=%:y-%02m-%02d.%02H; # UTC"
# time-stamp-time-zone: "UTC"
# End:
#! /bin/sh
# Set a version string for this script.
scriptversion=2014-01-07.03; # UTC
# A portable, pluggable option parser for Bourne shell.
# Written by Gary V. Vaughan, 2010
# Copyright (C) 2010-2015 Free Software Foundation, Inc.
# This is free software; see the source for copying conditions. There is NO
# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# Please report bugs or propose patches to gary@gnu.org.
## ------ ##
## Usage. ##
## ------ ##
# This file is a library for parsing options in your shell scripts along
# with assorted other useful supporting features that you can make use
# of too.
#
# For the simplest scripts you might need only:
#
# #!/bin/sh
# . relative/path/to/funclib.sh
# . relative/path/to/options-parser
# scriptversion=1.0
# func_options ${1+"$@"}
# eval set dummy "$func_options_result"; shift
# ...rest of your script...
#
# In order for the '--version' option to work, you will need to have a
# suitably formatted comment like the one at the top of this file
# starting with '# Written by ' and ending with '# warranty; '.
#
# For '-h' and '--help' to work, you will also need a one line
# description of your script's purpose in a comment directly above the
# '# Written by ' line, like the one at the top of this file.
#
# The default options also support '--debug', which will turn on shell
# execution tracing (see the comment above debug_cmd below for another
# use), and '--verbose' and the func_verbose function to allow your script
# to display verbose messages only when your user has specified
# '--verbose'.
#
# After sourcing this file, you can plug processing for additional
# options by amending the variables from the 'Configuration' section
# below, and following the instructions in the 'Option parsing'
# section further down.
## -------------- ##
## Configuration. ##
## -------------- ##
# You should override these variables in your script after sourcing this
# file so that they reflect the customisations you have added to the
# option parser.
# The usage line for option parsing errors and the start of '-h' and
# '--help' output messages. You can embed shell variables for delayed
# expansion at the time the message is displayed, but you will need to
# quote other shell meta-characters carefully to prevent them being
# expanded when the contents are evaled.
usage='$progpath [OPTION]...'
# Short help message in response to '-h' and '--help'. Add to this or
# override it after sourcing this library to reflect the full set of
# options your script accepts.
usage_message="\
--debug enable verbose shell tracing
-W, --warnings=CATEGORY
report the warnings falling in CATEGORY [all]
-v, --verbose verbosely report processing
--version print version information and exit
-h, --help print short or long help message and exit
"
# Additional text appended to 'usage_message' in response to '--help'.
long_help_message="
Warning categories include:
'all' show all warnings
'none' turn off all the warnings
'error' warnings are treated as fatal errors"
# Help message printed before fatal option parsing errors.
fatal_help="Try '\$progname --help' for more information."
## ------------------------- ##
## Hook function management. ##
## ------------------------- ##
# This section contains functions for adding, removing, and running hooks
# to the main code. A hook is just a named list of functions that can
# be run in order later on.
# func_hookable FUNC_NAME
# -----------------------
# Declare that FUNC_NAME will run hooks added with
# 'func_add_hook FUNC_NAME ...'.
func_hookable ()
{
$debug_cmd
func_append hookable_fns " $1"
}
# func_add_hook FUNC_NAME HOOK_FUNC
# ---------------------------------
# Request that FUNC_NAME call HOOK_FUNC before it returns. FUNC_NAME must
# first have been declared "hookable" by a call to 'func_hookable'.
func_add_hook ()
{
$debug_cmd
case " $hookable_fns " in
*" $1 "*) ;;
*) func_fatal_error "'$1' does not accept hook functions." ;;
esac
eval func_append ${1}_hooks '" $2"'
}
# func_remove_hook FUNC_NAME HOOK_FUNC
# ------------------------------------
# Remove HOOK_FUNC from the list of functions called by FUNC_NAME.
func_remove_hook ()
{
$debug_cmd
eval ${1}_hooks='`$ECHO "\$'$1'_hooks" |$SED "s| '$2'||"`'
}
# func_run_hooks FUNC_NAME [ARG]...
# ---------------------------------
# Run all hook functions registered to FUNC_NAME.
# It is assumed that the list of hook functions contains nothing more
# than a whitespace-delimited list of legal shell function names, and
# no effort is wasted trying to catch shell meta-characters or preserve
# whitespace.
func_run_hooks ()
{
$debug_cmd
case " $hookable_fns " in
*" $1 "*) ;;
*) func_fatal_error "'$1' does not support hook funcions.n" ;;
esac
eval _G_hook_fns=\$$1_hooks; shift
for _G_hook in $_G_hook_fns; do
eval $_G_hook '"$@"'
# store returned options list back into positional
# parameters for next 'cmd' execution.
eval _G_hook_result=\$${_G_hook}_result
eval set dummy "$_G_hook_result"; shift
done
func_quote_for_eval ${1+"$@"}
func_run_hooks_result=$func_quote_for_eval_result
}
## --------------- ##
## Option parsing. ##
## --------------- ##
# In order to add your own option parsing hooks, you must accept the
# full positional parameter list in your hook function, remove any
# options that you action, and then pass back the remaining unprocessed
# options in '<hooked_function_name>_result', escaped suitably for
# 'eval'. Like this:
#
# my_options_prep ()
# {
# $debug_cmd
#
# # Extend the existing usage message.
# usage_message=$usage_message'
# -s, --silent don'\''t print informational messages
# '
#
# func_quote_for_eval ${1+"$@"}
# my_options_prep_result=$func_quote_for_eval_result
# }
# func_add_hook func_options_prep my_options_prep
#
#
# my_silent_option ()
# {
# $debug_cmd
#
# # Note that for efficiency, we parse as many options as we can
# # recognise in a loop before passing the remainder back to the
# # caller on the first unrecognised argument we encounter.
# while test $# -gt 0; do
# opt=$1; shift
# case $opt in
# --silent|-s) opt_silent=: ;;
# # Separate non-argument short options:
# -s*) func_split_short_opt "$_G_opt"
# set dummy "$func_split_short_opt_name" \
# "-$func_split_short_opt_arg" ${1+"$@"}
# shift
# ;;
# *) set dummy "$_G_opt" "$*"; shift; break ;;
# esac
# done
#
# func_quote_for_eval ${1+"$@"}
# my_silent_option_result=$func_quote_for_eval_result
# }
# func_add_hook func_parse_options my_silent_option
#
#
# my_option_validation ()
# {
# $debug_cmd
#
# $opt_silent && $opt_verbose && func_fatal_help "\
# '--silent' and '--verbose' options are mutually exclusive."
#
# func_quote_for_eval ${1+"$@"}
# my_option_validation_result=$func_quote_for_eval_result
# }
# func_add_hook func_validate_options my_option_validation
#
# You'll also need to manually amend $usage_message to reflect the extra
# options you parse. It's preferable to append if you can, so that
# multiple option parsing hooks can be added safely.
# func_options [ARG]...
# ---------------------
# All the functions called inside func_options are hookable. See the
# individual implementations for details.
func_hookable func_options
func_options ()
{
$debug_cmd
func_options_prep ${1+"$@"}
eval func_parse_options \
${func_options_prep_result+"$func_options_prep_result"}
eval func_validate_options \
${func_parse_options_result+"$func_parse_options_result"}
eval func_run_hooks func_options \
${func_validate_options_result+"$func_validate_options_result"}
# save modified positional parameters for caller
func_options_result=$func_run_hooks_result
}
# func_options_prep [ARG]...
# --------------------------
# All initialisations required before starting the option parse loop.
# Note that when calling hook functions, we pass through the list of
# positional parameters. If a hook function modifies that list, and
# needs to propagate that back to the rest of this script, then the complete
# modified list must be put in 'func_run_hooks_result' before
# returning.
func_hookable func_options_prep
func_options_prep ()
{
$debug_cmd
# Option defaults:
opt_verbose=false
opt_warning_types=
func_run_hooks func_options_prep ${1+"$@"}
# save modified positional parameters for caller
func_options_prep_result=$func_run_hooks_result
}
# func_parse_options [ARG]...
# ---------------------------
# The main option parsing loop.
func_hookable func_parse_options
func_parse_options ()
{
$debug_cmd
func_parse_options_result=
# this just eases exit handling
while test $# -gt 0; do
# Defer to hook functions for initial option parsing, so they
# get priority in the event of reusing an option name.
func_run_hooks func_parse_options ${1+"$@"}
# Adjust func_parse_options positional parameters to match
eval set dummy "$func_run_hooks_result"; shift
# Break out of the loop if we already parsed every option.
test $# -gt 0 || break
_G_opt=$1
shift
case $_G_opt in
--debug|-x) debug_cmd='set -x'
func_echo "enabling shell trace mode"
$debug_cmd
;;
--no-warnings|--no-warning|--no-warn)
set dummy --warnings none ${1+"$@"}
shift
;;
--warnings|--warning|-W)
test $# = 0 && func_missing_arg $_G_opt && break
case " $warning_categories $1" in
*" $1 "*)
# trailing space prevents matching last $1 above
func_append_uniq opt_warning_types " $1"
;;
*all)
opt_warning_types=$warning_categories
;;
*none)
opt_warning_types=none
warning_func=:
;;
*error)
opt_warning_types=$warning_categories
warning_func=func_fatal_error
;;
*)
func_fatal_error \
"unsupported warning category: '$1'"
;;
esac
shift
;;
--verbose|-v) opt_verbose=: ;;
--version) func_version ;;
-\?|-h) func_usage ;;
--help) func_help ;;
# Separate optargs to long options (plugins may need this):
--*=*) func_split_equals "$_G_opt"
set dummy "$func_split_equals_lhs" \
"$func_split_equals_rhs" ${1+"$@"}
shift
;;
# Separate optargs to short options:
-W*)
func_split_short_opt "$_G_opt"
set dummy "$func_split_short_opt_name" \
"$func_split_short_opt_arg" ${1+"$@"}
shift
;;
# Separate non-argument short options:
-\?*|-h*|-v*|-x*)
func_split_short_opt "$_G_opt"
set dummy "$func_split_short_opt_name" \
"-$func_split_short_opt_arg" ${1+"$@"}
shift
;;
--) break ;;
-*) func_fatal_help "unrecognised option: '$_G_opt'" ;;
*) set dummy "$_G_opt" ${1+"$@"}; shift; break ;;
esac
done
# save modified positional parameters for caller
func_quote_for_eval ${1+"$@"}
func_parse_options_result=$func_quote_for_eval_result
}
# func_validate_options [ARG]...
# ------------------------------
# Perform any sanity checks on option settings and/or unconsumed
# arguments.
func_hookable func_validate_options
func_validate_options ()
{
$debug_cmd
# Display all warnings if -W was not given.
test -n "$opt_warning_types" || opt_warning_types=" $warning_categories"
func_run_hooks func_validate_options ${1+"$@"}
# Bail if the options were screwed!
$exit_cmd $EXIT_FAILURE
# save modified positional parameters for caller
func_validate_options_result=$func_run_hooks_result
}
## ----------------- ##
## Helper functions. ##
## ----------------- ##
# This section contains the helper functions used by the rest of the
# hookable option parser framework in ascii-betical order.
# func_fatal_help ARG...
# ----------------------
# Echo program name prefixed message to standard error, followed by
# a help hint, and exit.
func_fatal_help ()
{
$debug_cmd
eval \$ECHO \""Usage: $usage"\"
eval \$ECHO \""$fatal_help"\"
func_error ${1+"$@"}
exit $EXIT_FAILURE
}
# func_help
# ---------
# Echo long help message to standard output and exit.
func_help ()
{
$debug_cmd
func_usage_message
$ECHO "$long_help_message"
exit 0
}
# func_missing_arg ARGNAME
# ------------------------
# Echo program name prefixed message to standard error and set global
# exit_cmd.
func_missing_arg ()
{
$debug_cmd
func_error "Missing argument for '$1'."
exit_cmd=exit
}
# func_split_equals STRING
# ------------------------
# Set func_split_equals_lhs and func_split_equals_rhs shell variables after
# splitting STRING at the '=' sign.
test -z "$_G_HAVE_XSI_OPS" \
&& (eval 'x=a/b/c;
test 5aa/bb/cc = "${#x}${x%%/*}${x%/*}${x#*/}${x##*/}"') 2>/dev/null \
&& _G_HAVE_XSI_OPS=yes
if test yes = "$_G_HAVE_XSI_OPS"
then
# This is an XSI compatible shell, allowing a faster implementation...
eval 'func_split_equals ()
{
$debug_cmd
func_split_equals_lhs=${1%%=*}
func_split_equals_rhs=${1#*=}
test "x$func_split_equals_lhs" = "x$1" \
&& func_split_equals_rhs=
}'
else
# ...otherwise fall back to using expr, which is often a shell builtin.
func_split_equals ()
{
$debug_cmd
func_split_equals_lhs=`expr "x$1" : 'x\([^=]*\)'`
func_split_equals_rhs=
test "x$func_split_equals_lhs" = "x$1" \
|| func_split_equals_rhs=`expr "x$1" : 'x[^=]*=\(.*\)$'`
}
fi #func_split_equals
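# Illustrative sketch (hypothetical option, not part of the generated
# script):
#
#    func_split_equals "--mode=link"
#    # func_split_equals_lhs -> "--mode"
#    # func_split_equals_rhs -> "link"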
# func_split_short_opt SHORTOPT
# -----------------------------
# Set func_split_short_opt_name and func_split_short_opt_arg shell
# variables after splitting SHORTOPT after the 2nd character.
if test yes = "$_G_HAVE_XSI_OPS"
then
# This is an XSI compatible shell, allowing a faster implementation...
eval 'func_split_short_opt ()
{
$debug_cmd
func_split_short_opt_arg=${1#??}
func_split_short_opt_name=${1%"$func_split_short_opt_arg"}
}'
else
# ...otherwise fall back to using expr, which is often a shell builtin.
func_split_short_opt ()
{
$debug_cmd
func_split_short_opt_name=`expr "x$1" : 'x-\(.\)'`
func_split_short_opt_arg=`expr "x$1" : 'x-.\(.*\)$'`
}
fi #func_split_short_opt
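# Illustrative sketch (hypothetical option, not part of the generated
# script):
#
#    func_split_short_opt "-Wall"
#    # func_split_short_opt_name -> "-W"
#    # func_split_short_opt_arg  -> "all"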
# func_usage
# ----------
# Echo short help message to standard output and exit.
func_usage ()
{
$debug_cmd
func_usage_message
$ECHO "Run '$progname --help |${PAGER-more}' for full usage"
exit 0
}
# func_usage_message
# ------------------
# Echo short help message to standard output.
func_usage_message ()
{
$debug_cmd
eval \$ECHO \""Usage: $usage"\"
echo
$SED -n 's|^# ||
/^Written by/{
x;p;x
}
h
/^Written by/q' < "$progpath"
echo
eval \$ECHO \""$usage_message"\"
}
# func_version
# ------------
# Echo version message to standard output and exit.
func_version ()
{
$debug_cmd
printf '%s\n' "$progname $scriptversion"
$SED -n '
/(C)/!b go
:more
/\./!{
N
s|\n# | |
b more
}
:go
/^# Written by /,/# warranty; / {
s|^# ||
s|^# *$||
s|\((C)\)[ 0-9,-]*[ ,-]\([1-9][0-9]* \)|\1 \2|
p
}
/^# Written by / {
s|^# ||
p
}
/^warranty; /q' < "$progpath"
exit $?
}
# Local variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'before-save-hook 'time-stamp)
# time-stamp-pattern: "10/scriptversion=%:y-%02m-%02d.%02H; # UTC"
# time-stamp-time-zone: "UTC"
# End:
# Set a version string.
scriptversion='(GNU libtool) 2.4.6'
# func_echo ARG...
# ----------------
# Libtool also displays the current mode in messages, so override
# funclib.sh func_echo with this custom definition.
func_echo ()
{
$debug_cmd
_G_message=$*
func_echo_IFS=$IFS
IFS=$nl
for _G_line in $_G_message; do
IFS=$func_echo_IFS
$ECHO "$progname${opt_mode+: $opt_mode}: $_G_line"
done
IFS=$func_echo_IFS
}
# func_warning ARG...
# -------------------
# Libtool warnings are not categorized, so override funclib.sh
# func_warning with this simpler definition.
func_warning ()
{
$debug_cmd
$warning_func ${1+"$@"}
}
## ---------------- ##
## Options parsing. ##
## ---------------- ##
# Hook in the functions to make sure our own options are parsed during
# the option parsing loop.
usage='$progpath [OPTION]... [MODE-ARG]...'
# Short help message in response to '-h'.
usage_message="Options:
--config show all configuration variables
--debug enable verbose shell tracing
-n, --dry-run display commands without modifying any files
--features display basic configuration information and exit
--mode=MODE use operation mode MODE
--no-warnings equivalent to '-Wnone'
--preserve-dup-deps don't remove duplicate dependency libraries
--quiet, --silent don't print informational messages
--tag=TAG use configuration variables from tag TAG
-v, --verbose print more informational messages than default
--version print version information
-W, --warnings=CATEGORY report the warnings falling in CATEGORY [all]
-h, --help, --help-all print short, long, or detailed help message
"
# Additional text appended to 'usage_message' in response to '--help'.
func_help ()
{
$debug_cmd
func_usage_message
$ECHO "$long_help_message
MODE must be one of the following:
clean remove files from the build directory
compile compile a source file into a libtool object
execute automatically set library path, then run a program
finish complete the installation of libtool libraries
install install libraries or executables
link create a library or an executable
uninstall remove libraries from an installed directory
MODE-ARGS vary depending on the MODE. When passed as first option,
'--mode=MODE' may be abbreviated as 'MODE' or a unique abbreviation of that.
Try '$progname --help --mode=MODE' for a more detailed description of MODE.
When reporting a bug, please describe a test case to reproduce it and
include the following information:
host-triplet: $host
shell: $SHELL
compiler: $LTCC
compiler flags: $LTCFLAGS
linker: $LD (gnu? $with_gnu_ld)
version: $progname $scriptversion Debian-2.4.6-2
automake: `($AUTOMAKE --version) 2>/dev/null |$SED 1q`
autoconf: `($AUTOCONF --version) 2>/dev/null |$SED 1q`
Report bugs to <bug-libtool@gnu.org>.
GNU libtool home page: <http://www.gnu.org/software/libtool/>.
General help using GNU software: <http://www.gnu.org/gethelp/>."
exit 0
}
# func_lo2o OBJECT-NAME
# ---------------------
# Transform OBJECT-NAME from a '.lo' suffix to the platform specific
# object suffix.
lo2o=s/\\.lo\$/.$objext/
o2lo=s/\\.$objext\$/.lo/
if test yes = "$_G_HAVE_XSI_OPS"; then
eval 'func_lo2o ()
{
case $1 in
*.lo) func_lo2o_result=${1%.lo}.$objext ;;
* ) func_lo2o_result=$1 ;;
esac
}'
# func_xform LIBOBJ-OR-SOURCE
# ---------------------------
# Transform LIBOBJ-OR-SOURCE from a '.o' or '.c' (or otherwise)
# suffix to a '.lo' libtool-object suffix.
eval 'func_xform ()
{
func_xform_result=${1%.*}.lo
}'
else
# ...otherwise fall back to using sed.
func_lo2o ()
{
func_lo2o_result=`$ECHO "$1" | $SED "$lo2o"`
}
func_xform ()
{
func_xform_result=`$ECHO "$1" | $SED 's|\.[^.]*$|.lo|'`
}
fi
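# Illustrative sketch (hypothetical file names, editor's illustration
# rather than part of the generated script), assuming $objext is "o":
#
#    func_lo2o "foo.lo"     # func_lo2o_result  -> "foo.o"
#    func_xform "bar.c"     # func_xform_result -> "bar.lo"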
# func_fatal_configuration ARG...
# -------------------------------
# Echo program name prefixed message to standard error, followed by
# a configuration failure hint, and exit.
func_fatal_configuration ()
{
func_fatal_error ${1+"$@"} \
"See the $PACKAGE documentation for more information." \
"Fatal configuration error."
}
# func_config
# -----------
# Display the configuration for all the tags in this script.
func_config ()
{
re_begincf='^# ### BEGIN LIBTOOL'
re_endcf='^# ### END LIBTOOL'
# Default configuration.
$SED "1,/$re_begincf CONFIG/d;/$re_endcf CONFIG/,\$d" < "$progpath"
# Now print the configurations for the tags.
for tagname in $taglist; do
$SED -n "/$re_begincf TAG CONFIG: $tagname\$/,/$re_endcf TAG CONFIG: $tagname\$/p" < "$progpath"
done
exit $?
}
# func_features
# -------------
# Display the features supported by this script.
func_features ()
{
echo "host: $host"
if test yes = "$build_libtool_libs"; then
echo "enable shared libraries"
else
echo "disable shared libraries"
fi
if test yes = "$build_old_libs"; then
echo "enable static libraries"
else
echo "disable static libraries"
fi
exit $?
}
# func_enable_tag TAGNAME
# -----------------------
# Verify that TAGNAME is valid, and either flag an error and exit, or
# enable the TAGNAME tag. We also add TAGNAME to the global $taglist
# variable here.
func_enable_tag ()
{
# Global variable:
tagname=$1
re_begincf="^# ### BEGIN LIBTOOL TAG CONFIG: $tagname\$"
re_endcf="^# ### END LIBTOOL TAG CONFIG: $tagname\$"
sed_extractcf=/$re_begincf/,/$re_endcf/p
# Validate tagname.
case $tagname in
*[!-_A-Za-z0-9,/]*)
func_fatal_error "invalid tag name: $tagname"
;;
esac
# Don't test for the "default" C tag, as we know it's
# there but not specially marked.
case $tagname in
CC) ;;
*)
if $GREP "$re_begincf" "$progpath" >/dev/null 2>&1; then
taglist="$taglist $tagname"
# Evaluate the configuration. Be careful to quote the path
# and the sed script, to avoid splitting on whitespace, but
# also don't use non-portable quotes within backquotes within
# quotes we have to do it in 2 steps:
extractedcf=`$SED -n -e "$sed_extractcf" < "$progpath"`
eval "$extractedcf"
else
func_error "ignoring unknown tag $tagname"
fi
;;
esac
}
# func_check_version_match
# ------------------------
# Ensure that we are using m4 macros, and libtool script from the same
# release of libtool.
func_check_version_match ()
{
if test "$package_revision" != "$macro_revision"; then
if test "$VERSION" != "$macro_version"; then
if test -z "$macro_version"; then
cat >&2 <<_LT_EOF
$progname: Version mismatch error. This is $PACKAGE $VERSION, but the
$progname: definition of this LT_INIT comes from an older release.
$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
$progname: and run autoconf again.
_LT_EOF
else
cat >&2 <<_LT_EOF
$progname: Version mismatch error. This is $PACKAGE $VERSION, but the
$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
$progname: and run autoconf again.
_LT_EOF
fi
else
cat >&2 <<_LT_EOF
$progname: Version mismatch error. This is $PACKAGE $VERSION, revision $package_revision,
$progname: but the definition of this LT_INIT comes from revision $macro_revision.
$progname: You should recreate aclocal.m4 with macros from revision $package_revision
$progname: of $PACKAGE $VERSION and run autoconf again.
_LT_EOF
fi
exit $EXIT_MISMATCH
fi
}
# libtool_options_prep [ARG]...
# -----------------------------
# Preparation for options parsed by libtool.
libtool_options_prep ()
{
$debug_cmd
# Option defaults:
opt_config=false
opt_dlopen=
opt_dry_run=false
opt_help=false
opt_mode=
opt_preserve_dup_deps=false
opt_quiet=false
nonopt=
preserve_args=
# Shorthand for --mode=foo, only valid as the first argument
case $1 in
clean|clea|cle|cl)
shift; set dummy --mode clean ${1+"$@"}; shift
;;
compile|compil|compi|comp|com|co|c)
shift; set dummy --mode compile ${1+"$@"}; shift
;;
execute|execut|execu|exec|exe|ex|e)
shift; set dummy --mode execute ${1+"$@"}; shift
;;
finish|finis|fini|fin|fi|f)
shift; set dummy --mode finish ${1+"$@"}; shift
;;
install|instal|insta|inst|ins|in|i)
shift; set dummy --mode install ${1+"$@"}; shift
;;
link|lin|li|l)
shift; set dummy --mode link ${1+"$@"}; shift
;;
uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
shift; set dummy --mode uninstall ${1+"$@"}; shift
;;
esac
# Pass back the list of options.
func_quote_for_eval ${1+"$@"}
libtool_options_prep_result=$func_quote_for_eval_result
}
func_add_hook func_options_prep libtool_options_prep
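# Illustrative example (editorial, not part of the upstream script): thanks to
# the shorthand handling above, these two invocations are equivalent:
#   libtool clean /bin/rm foo.lo
#   libtool --mode clean /bin/rm foo.lo
# Unambiguous prefixes such as 'cl' or 'co' are accepted as well, but only as
# the first argument.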
# libtool_parse_options [ARG]...
# ---------------------------------
# Provide handling for libtool specific options.
libtool_parse_options ()
{
$debug_cmd
# Perform our own loop to consume as many options as possible in
# each iteration.
while test $# -gt 0; do
_G_opt=$1
shift
case $_G_opt in
--dry-run|--dryrun|-n)
opt_dry_run=:
;;
--config) func_config ;;
--dlopen|-dlopen)
opt_dlopen="${opt_dlopen+$opt_dlopen
}$1"
shift
;;
--preserve-dup-deps)
opt_preserve_dup_deps=: ;;
--features) func_features ;;
--finish) set dummy --mode finish ${1+"$@"}; shift ;;
--help) opt_help=: ;;
--help-all) opt_help=': help-all' ;;
--mode) test $# = 0 && func_missing_arg $_G_opt && break
opt_mode=$1
case $1 in
# Valid mode arguments:
clean|compile|execute|finish|install|link|relink|uninstall) ;;
# Catch anything else as an error
*) func_error "invalid argument for $_G_opt"
exit_cmd=exit
break
;;
esac
shift
;;
--no-silent|--no-quiet)
opt_quiet=false
func_append preserve_args " $_G_opt"
;;
--no-warnings|--no-warning|--no-warn)
opt_warning=false
func_append preserve_args " $_G_opt"
;;
--no-verbose)
opt_verbose=false
func_append preserve_args " $_G_opt"
;;
--silent|--quiet)
opt_quiet=:
opt_verbose=false
func_append preserve_args " $_G_opt"
;;
--tag) test $# = 0 && func_missing_arg $_G_opt && break
opt_tag=$1
func_append preserve_args " $_G_opt $1"
func_enable_tag "$1"
shift
;;
--verbose|-v) opt_quiet=false
opt_verbose=:
func_append preserve_args " $_G_opt"
;;
# An option not handled by this hook function:
*) set dummy "$_G_opt" ${1+"$@"}; shift; break ;;
esac
done
# save modified positional parameters for caller
func_quote_for_eval ${1+"$@"}
libtool_parse_options_result=$func_quote_for_eval_result
}
func_add_hook func_parse_options libtool_parse_options
# libtool_validate_options [ARG]...
# ---------------------------------
# Perform any sanity checks on option settings and/or unconsumed
# arguments.
libtool_validate_options ()
{
# save first non-option argument
if test 0 -lt $#; then
nonopt=$1
shift
fi
# preserve --debug
test : = "$debug_cmd" || func_append preserve_args " --debug"
case $host in
# Solaris2 added to fix http://debbugs.gnu.org/cgi/bugreport.cgi?bug=16452
# see also: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=59788
*cygwin* | *mingw* | *pw32* | *cegcc* | *solaris2* | *os2*)
# don't eliminate duplications in $postdeps and $predeps
opt_duplicate_compiler_generated_deps=:
;;
*)
opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps
;;
esac
$opt_help || {
# Sanity checks first:
func_check_version_match
test yes != "$build_libtool_libs" \
&& test yes != "$build_old_libs" \
&& func_fatal_configuration "not configured to build any kind of library"
# Darwin sucks
eval std_shrext=\"$shrext_cmds\"
# Only execute mode is allowed to have -dlopen flags.
if test -n "$opt_dlopen" && test execute != "$opt_mode"; then
func_error "unrecognized option '-dlopen'"
$ECHO "$help" 1>&2
exit $EXIT_FAILURE
fi
# Change the help message to a mode-specific one.
generic_help=$help
help="Try '$progname --help --mode=$opt_mode' for more information."
}
# Pass back the unparsed argument list
func_quote_for_eval ${1+"$@"}
libtool_validate_options_result=$func_quote_for_eval_result
}
func_add_hook func_validate_options libtool_validate_options
# Process options as early as possible so that --help and --version
# can return quickly.
func_options ${1+"$@"}
eval set dummy "$func_options_result"; shift
## ----------- ##
## Main. ##
## ----------- ##
magic='%%%MAGIC variable%%%'
magic_exe='%%%MAGIC EXE variable%%%'
# Global variables.
extracted_archives=
extracted_serial=0
# If this variable is set in any of the actions, the command in it
# will be execed at the end. This prevents here-documents from being
# left over by shells.
exec_cmd=
# A function that is used when there is no print builtin or printf.
func_fallback_echo ()
{
eval 'cat <<_LTECHO_EOF
$1
_LTECHO_EOF'
}
# func_generated_by_libtool
# True iff stdin has been generated by Libtool. This function is only
# a basic sanity check; it will hardly flush out determined imposters.
func_generated_by_libtool_p ()
{
$GREP "^# Generated by .*$PACKAGE" > /dev/null 2>&1
}
# func_lalib_p file
# True iff FILE is a libtool '.la' library or '.lo' object file.
# This function is only a basic sanity check; it will hardly flush out
# determined imposters.
func_lalib_p ()
{
test -f "$1" &&
$SED -e 4q "$1" 2>/dev/null | func_generated_by_libtool_p
}
# func_lalib_unsafe_p file
# True iff FILE is a libtool '.la' library or '.lo' object file.
# This function implements the same check as func_lalib_p without
# resorting to external programs. To this end, it redirects stdin and
# closes it afterwards, without saving the original file descriptor.
# As a safety measure, use it only where a negative result would be
# fatal anyway. Works if 'file' does not exist.
func_lalib_unsafe_p ()
{
lalib_p=no
if test -f "$1" && test -r "$1" && exec 5<&0 <"$1"; then
for lalib_p_l in 1 2 3 4
do
read lalib_p_line
case $lalib_p_line in
\#\ Generated\ by\ *$PACKAGE* ) lalib_p=yes; break;;
esac
done
exec 0<&5 5<&-
fi
test yes = "$lalib_p"
}
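# Illustrative example (editorial, not part of the upstream script): both
# checks above key off the header libtool writes into every .la and .lo file,
# e.g.
#   # libfoo.la - a libtool library file
#   # Generated by libtool (GNU libtool) 2.4.6
# Any file carrying that "Generated by" line within its first four lines
# passes func_lalib_p and func_lalib_unsafe_p.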
# func_ltwrapper_script_p file
# True iff FILE is a libtool wrapper script
# This function is only a basic sanity check; it will hardly flush out
# determined imposters.
func_ltwrapper_script_p ()
{
test -f "$1" &&
$lt_truncate_bin < "$1" 2>/dev/null | func_generated_by_libtool_p
}
# func_ltwrapper_executable_p file
# True iff FILE is a libtool wrapper executable
# This function is only a basic sanity check; it will hardly flush out
# determined imposters.
func_ltwrapper_executable_p ()
{
func_ltwrapper_exec_suffix=
case $1 in
*.exe) ;;
*) func_ltwrapper_exec_suffix=.exe ;;
esac
$GREP "$magic_exe" "$1$func_ltwrapper_exec_suffix" >/dev/null 2>&1
}
# func_ltwrapper_scriptname file
# Assumes file is an ltwrapper_executable
# uses $file to determine the appropriate filename for a
# temporary ltwrapper_script.
func_ltwrapper_scriptname ()
{
func_dirname_and_basename "$1" "" "."
func_stripname '' '.exe' "$func_basename_result"
func_ltwrapper_scriptname_result=$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper
}
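# Illustrative example (editorial, not part of the upstream script), assuming
# the usual objdir of '.libs':
#   func_ltwrapper_scriptname "tests/prog.exe"
#   # -> func_ltwrapper_scriptname_result=tests/.libs/prog_ltshwrapper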
# func_ltwrapper_p file
# True iff FILE is a libtool wrapper script or wrapper executable
# This function is only a basic sanity check; it will hardly flush out
# determined imposters.
func_ltwrapper_p ()
{
func_ltwrapper_script_p "$1" || func_ltwrapper_executable_p "$1"
}
# func_execute_cmds commands fail_cmd
# Execute tilde-delimited COMMANDS.
# If FAIL_CMD is given, eval that upon failure.
# FAIL_CMD may read-access the current command in variable CMD!
func_execute_cmds ()
{
$debug_cmd
save_ifs=$IFS; IFS='~'
for cmd in $1; do
IFS=$sp$nl
eval cmd=\"$cmd\"
IFS=$save_ifs
func_show_eval "$cmd" "${2-:}"
done
IFS=$save_ifs
}
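# Illustrative example (editorial, not part of the upstream script): command
# lists such as $postinstall_cmds are stored as one tilde-separated string, so
#   func_execute_cmds 'chmod 644 $oldlib~$RANLIB $oldlib' 'exit $?'
# runs 'chmod 644 ...' and then '$RANLIB ...' as two separate commands and
# evals the fail command if either of them fails.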
# func_source file
# Source FILE, adding directory component if necessary.
# Note that it is not necessary on cygwin/mingw to append a dot to
# FILE even if both FILE and FILE.exe exist: automatic-append-.exe
# behavior happens only for exec(3), not for open(2)! Also, sourcing
# 'FILE.' does not work on cygwin managed mounts.
func_source ()
{
$debug_cmd
case $1 in
*/* | *\\*) . "$1" ;;
*) . "./$1" ;;
esac
}
# func_resolve_sysroot PATH
# Replace a leading = in PATH with a sysroot. Store the result into
# func_resolve_sysroot_result
func_resolve_sysroot ()
{
func_resolve_sysroot_result=$1
case $func_resolve_sysroot_result in
=*)
func_stripname '=' '' "$func_resolve_sysroot_result"
func_resolve_sysroot_result=$lt_sysroot$func_stripname_result
;;
esac
}
# func_replace_sysroot PATH
# If PATH begins with the sysroot, replace it with = and
# store the result into func_replace_sysroot_result.
func_replace_sysroot ()
{
case $lt_sysroot:$1 in
?*:"$lt_sysroot"*)
func_stripname "$lt_sysroot" '' "$1"
func_replace_sysroot_result='='$func_stripname_result
;;
*)
# Including no sysroot.
func_replace_sysroot_result=$1
;;
esac
}
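# Illustrative example (editorial, not part of the upstream script), assuming
# a hypothetical lt_sysroot=/opt/cross/sysroot:
#   func_resolve_sysroot "=/usr/lib/libz.la"
#   # -> func_resolve_sysroot_result=/opt/cross/sysroot/usr/lib/libz.la
#   func_replace_sysroot "/opt/cross/sysroot/usr/lib"
#   # -> func_replace_sysroot_result='=/usr/lib'
# The two functions are inverses: '=' marks a path as sysroot-relative inside
# .la files, and is expanded back to the real sysroot when the path is used.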
# func_infer_tag arg
# Infer tagged configuration to use if any are available and
# if one wasn't chosen via the "--tag" command line option.
# Only attempt this if the compiler in the base compile
# command doesn't match the default compiler.
# arg is usually of the form 'gcc ...'
func_infer_tag ()
{
$debug_cmd
if test -n "$available_tags" && test -z "$tagname"; then
CC_quoted=
for arg in $CC; do
func_append_quoted CC_quoted "$arg"
done
CC_expanded=`func_echo_all $CC`
CC_quoted_expanded=`func_echo_all $CC_quoted`
case $@ in
# Blanks in the command may have been stripped by the calling shell,
# but not from the CC environment variable when configure was run.
" $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \
" $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) ;;
# Blanks at the start of $base_compile will cause this to fail
# if we don't check for them as well.
*)
for z in $available_tags; do
if $GREP "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$progpath" > /dev/null; then
# Evaluate the configuration.
eval "`$SED -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $progpath`"
CC_quoted=
for arg in $CC; do
# Double-quote args containing other shell metacharacters.
func_append_quoted CC_quoted "$arg"
done
CC_expanded=`func_echo_all $CC`
CC_quoted_expanded=`func_echo_all $CC_quoted`
case "$@ " in
" $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \
" $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*)
# The compiler in the base compile command matches
# the one in the tagged configuration.
# Assume this is the tagged configuration we want.
tagname=$z
break
;;
esac
fi
done
# If $tagname still isn't set, then no tagged configuration
# was found and let the user know that the "--tag" command
# line option must be used.
if test -z "$tagname"; then
func_echo "unable to infer tagged configuration"
func_fatal_error "specify a tag with '--tag'"
# else
# func_verbose "using $tagname tagged configuration"
fi
;;
esac
fi
}
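# Illustrative example (editorial, not part of the upstream script): given a
# base compile command of 'g++ -c foo.cpp' and a default CC of 'gcc', the loop
# above scans the tagged configurations (e.g. CXX) for one whose CC matches
# 'g++' and selects it silently; if none matches, the tag must be given
# explicitly:
#   libtool --tag=CXX --mode=compile g++ -c foo.cpp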
# func_write_libtool_object output_name pic_name nonpic_name
# Create a libtool object file (analogous to a ".la" file),
# but don't create it if we're doing a dry run.
func_write_libtool_object ()
{
write_libobj=$1
if test yes = "$build_libtool_libs"; then
write_lobj=\'$2\'
else
write_lobj=none
fi
if test yes = "$build_old_libs"; then
write_oldobj=\'$3\'
else
write_oldobj=none
fi
$opt_dry_run || {
cat >${write_libobj}T <<EOF
# $write_libobj - a libtool object file
# Generated by $PROGRAM (GNU $PACKAGE) $VERSION
#
# Please DO NOT delete this file!
# It is necessary for linking the library.
# Name of the PIC object.
pic_object=$write_lobj
# Name of the non-PIC object
non_pic_object=$write_oldobj
EOF
$MV "${write_libobj}T" "${write_libobj}"
}
}
# func_convert_core_file_wine_to_w32 ARG
# Helper function used by file name conversion functions when $build is *nix
# and $host is mingw, cygwin, or some other w32 environment; relies on a
# correctly configured wine environment with the winepath program in $build's
# $PATH.
# Result is available in $func_convert_core_file_wine_to_w32_result, and will
# be empty on error (or when ARG is empty).
func_convert_core_file_wine_to_w32 ()
{
$debug_cmd
func_convert_core_file_wine_to_w32_result=$1
if test -n "$1"; then
# winepath does not reliably report failure via its exit status, so check
# both the exit code and that stdout is non-empty before trusting it.
func_convert_core_file_wine_to_w32_tmp=`winepath -w "$1" 2>/dev/null`
if test "$?" -eq 0 && test -n "$func_convert_core_file_wine_to_w32_tmp"; then
func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" |
$SED -e "$sed_naive_backslashify"`
else
func_convert_core_file_wine_to_w32_result=
fi
fi
}
# end: func_convert_core_file_wine_to_w32
# func_convert_core_path_wine_to_w32 ARG
# Helper function used by path conversion functions when $build is *nix, and
# $host is mingw, cygwin, or some other w32 environment. Relies on a correctly
# configured wine environment being available, with the winepath program in $build's
# $PATH. Assumes ARG has no leading or trailing path separator characters.
#
# ARG is path to be converted from $build format to win32.
# Result is available in $func_convert_core_path_wine_to_w32_result.
# Unconvertible file (directory) names in ARG are skipped; if no directory names
# are convertible, then the result may be empty.
func_convert_core_path_wine_to_w32 ()
{
$debug_cmd
# unfortunately, winepath doesn't convert paths, only file names
func_convert_core_path_wine_to_w32_result=
if test -n "$1"; then
oldIFS=$IFS
IFS=:
for func_convert_core_path_wine_to_w32_f in $1; do
IFS=$oldIFS
func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f"
if test -n "$func_convert_core_file_wine_to_w32_result"; then
if test -z "$func_convert_core_path_wine_to_w32_result"; then
func_convert_core_path_wine_to_w32_result=$func_convert_core_file_wine_to_w32_result
else
func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result"
fi
fi
done
IFS=$oldIFS
fi
}
# end: func_convert_core_path_wine_to_w32
# func_cygpath ARGS...
# Wrapper around calling the cygpath program via LT_CYGPATH. This is used when
# (1) $build is *nix and Cygwin is hosted via a wine environment; or (2)
# $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. In case (1) or
# (2), returns the Cygwin file name or path in func_cygpath_result (input
# file name or path is assumed to be in w32 format, as previously converted
# from $build's *nix or MSYS format). In case (3), returns the w32 file name
# or path in func_cygpath_result (input file name or path is assumed to be in
# Cygwin format). Returns an empty string on error.
#
# ARGS are passed to cygpath, with the last one being the file name or path to
# be converted.
#
# Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH
# environment variable; do not put it in $PATH.
func_cygpath ()
{
$debug_cmd
if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then
func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null`
if test "$?" -ne 0; then
# on failure, ensure result is empty
func_cygpath_result=
fi
else
func_cygpath_result=
func_error "LT_CYGPATH is empty or specifies non-existent file: '$LT_CYGPATH'"
fi
}
#end: func_cygpath
# func_convert_core_msys_to_w32 ARG
# Convert file name or path ARG from MSYS format to w32 format. Return
# result in func_convert_core_msys_to_w32_result.
func_convert_core_msys_to_w32 ()
{
$debug_cmd
# awkward: cmd appends spaces to result
func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null |
$SED -e 's/[ ]*$//' -e "$sed_naive_backslashify"`
}
#end: func_convert_core_msys_to_w32
# func_convert_file_check ARG1 ARG2
# Verify that ARG1 (a file name in $build format) was converted to $host
# format in ARG2. Otherwise, emit an error message, but continue (resetting
# func_to_host_file_result to ARG1).
func_convert_file_check ()
{
$debug_cmd
if test -z "$2" && test -n "$1"; then
func_error "Could not determine host file name corresponding to"
func_error " '$1'"
func_error "Continuing, but uninstalled executables may not work."
# Fallback:
func_to_host_file_result=$1
fi
}
# end func_convert_file_check
# func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH
# Verify that FROM_PATH (a path in $build format) was converted to $host
# format in TO_PATH. Otherwise, emit an error message, but continue, resetting
# func_to_host_path_result to a simplistic fallback value (see below).
func_convert_path_check ()
{
$debug_cmd
if test -z "$4" && test -n "$3"; then
func_error "Could not determine the host path corresponding to"
func_error " '$3'"
func_error "Continuing, but uninstalled executables may not work."
# Fallback. This is a deliberately simplistic "conversion" and
# should not be "improved". See libtool.info.
if test "x$1" != "x$2"; then
lt_replace_pathsep_chars="s|$1|$2|g"
func_to_host_path_result=`echo "$3" |
$SED -e "$lt_replace_pathsep_chars"`
else
func_to_host_path_result=$3
fi
fi
}
# end func_convert_path_check
# func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG
# Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT
# and appending REPL if ORIG matches BACKPAT.
func_convert_path_front_back_pathsep ()
{
$debug_cmd
case $4 in
$1 ) func_to_host_path_result=$3$func_to_host_path_result
;;
esac
case $4 in
$2 ) func_append func_to_host_path_result "$3"
;;
esac
}
# end func_convert_path_front_back_pathsep
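# Illustrative example (editorial, not part of the upstream script): with
# FRONTPAT=":*", BACKPAT="*:", REPL=";" and ORIG=":/c/foo:/c/bar:", a converted
# result of "c:\foo;c:\bar" becomes ";c:\foo;c:\bar;", restoring the separators
# that func_stripname removed before the conversion.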
##################################################
# $build to $host FILE NAME CONVERSION FUNCTIONS #
##################################################
# invoked via '$to_host_file_cmd ARG'
#
# In each case, ARG is the path to be converted from $build to $host format.
# Result will be available in $func_to_host_file_result.
# func_to_host_file ARG
# Converts the file name ARG from $build format to $host format. Return result
# in func_to_host_file_result.
func_to_host_file ()
{
$debug_cmd
$to_host_file_cmd "$1"
}
# end func_to_host_file
# func_to_tool_file ARG LAZY
# converts the file name ARG from $build format to toolchain format. Return
# result in func_to_tool_file_result. If the conversion in use is listed
# in (the comma separated) LAZY, no conversion takes place.
func_to_tool_file ()
{
$debug_cmd
case ,$2, in
*,"$to_tool_file_cmd",*)
func_to_tool_file_result=$1
;;
*)
$to_tool_file_cmd "$1"
func_to_tool_file_result=$func_to_host_file_result
;;
esac
}
# end func_to_tool_file
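# Illustrative example (editorial, not part of the upstream script): the LAZY
# list lets a caller skip a conversion the tool performs itself, e.g.
#   func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
# leaves $srcfile untouched when $to_tool_file_cmd is
# func_convert_file_msys_to_w32 (an MSYS-aware tool converts such names on its
# own) and otherwise runs $to_tool_file_cmd on it.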
# func_convert_file_noop ARG
# Copy ARG to func_to_host_file_result.
func_convert_file_noop ()
{
func_to_host_file_result=$1
}
# end func_convert_file_noop
# func_convert_file_msys_to_w32 ARG
# Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic
# conversion to w32 is not available inside the cwrapper. Returns result in
# func_to_host_file_result.
func_convert_file_msys_to_w32 ()
{
$debug_cmd
func_to_host_file_result=$1
if test -n "$1"; then
func_convert_core_msys_to_w32 "$1"
func_to_host_file_result=$func_convert_core_msys_to_w32_result
fi
func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_msys_to_w32
# func_convert_file_cygwin_to_w32 ARG
# Convert file name ARG from Cygwin to w32 format. Returns result in
# func_to_host_file_result.
func_convert_file_cygwin_to_w32 ()
{
$debug_cmd
func_to_host_file_result=$1
if test -n "$1"; then
# because $build is cygwin, we call "the" cygpath in $PATH; no need to use
# LT_CYGPATH in this case.
func_to_host_file_result=`cygpath -m "$1"`
fi
func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_cygwin_to_w32
# func_convert_file_nix_to_w32 ARG
# Convert file name ARG from *nix to w32 format. Requires a wine environment
# and a working winepath. Returns result in func_to_host_file_result.
func_convert_file_nix_to_w32 ()
{
$debug_cmd
func_to_host_file_result=$1
if test -n "$1"; then
func_convert_core_file_wine_to_w32 "$1"
func_to_host_file_result=$func_convert_core_file_wine_to_w32_result
fi
func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_nix_to_w32
# func_convert_file_msys_to_cygwin ARG
# Convert file name ARG from MSYS to Cygwin format. Requires LT_CYGPATH set.
# Returns result in func_to_host_file_result.
func_convert_file_msys_to_cygwin ()
{
$debug_cmd
func_to_host_file_result=$1
if test -n "$1"; then
func_convert_core_msys_to_w32 "$1"
func_cygpath -u "$func_convert_core_msys_to_w32_result"
func_to_host_file_result=$func_cygpath_result
fi
func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_msys_to_cygwin
# func_convert_file_nix_to_cygwin ARG
# Convert file name ARG from *nix to Cygwin format. Requires Cygwin installed
# in a wine environment, working winepath, and LT_CYGPATH set. Returns result
# in func_to_host_file_result.
func_convert_file_nix_to_cygwin ()
{
$debug_cmd
func_to_host_file_result=$1
if test -n "$1"; then
# convert from *nix to w32, then use cygpath to convert from w32 to cygwin.
func_convert_core_file_wine_to_w32 "$1"
func_cygpath -u "$func_convert_core_file_wine_to_w32_result"
func_to_host_file_result=$func_cygpath_result
fi
func_convert_file_check "$1" "$func_to_host_file_result"
}
# end func_convert_file_nix_to_cygwin
#############################################
# $build to $host PATH CONVERSION FUNCTIONS #
#############################################
# invoked via '$to_host_path_cmd ARG'
#
# In each case, ARG is the path to be converted from $build to $host format.
# The result will be available in $func_to_host_path_result.
#
# Path separators are also converted from $build format to $host format. If
# ARG begins or ends with a path separator character, it is preserved (but
# converted to $host format) on output.
#
# All path conversion functions are named using the following convention:
# file name conversion function : func_convert_file_X_to_Y ()
# path conversion function : func_convert_path_X_to_Y ()
# where, for any given $build/$host combination the 'X_to_Y' value is the
# same. If conversion functions are added for new $build/$host combinations,
# the two new functions must follow this pattern, or func_init_to_host_path_cmd
# will break.
# func_init_to_host_path_cmd
# Ensures that function "pointer" variable $to_host_path_cmd is set to the
# appropriate value, based on the value of $to_host_file_cmd.
to_host_path_cmd=
func_init_to_host_path_cmd ()
{
$debug_cmd
if test -z "$to_host_path_cmd"; then
func_stripname 'func_convert_file_' '' "$to_host_file_cmd"
to_host_path_cmd=func_convert_path_$func_stripname_result
fi
}
# func_to_host_path ARG
# Converts the path ARG from $build format to $host format. Return result
# in func_to_host_path_result.
func_to_host_path ()
{
$debug_cmd
func_init_to_host_path_cmd
$to_host_path_cmd "$1"
}
# end func_to_host_path
# func_convert_path_noop ARG
# Copy ARG to func_to_host_path_result.
func_convert_path_noop ()
{
func_to_host_path_result=$1
}
# end func_convert_path_noop
# func_convert_path_msys_to_w32 ARG
# Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic
# conversion to w32 is not available inside the cwrapper. Returns result in
# func_to_host_path_result.
func_convert_path_msys_to_w32 ()
{
$debug_cmd
func_to_host_path_result=$1
if test -n "$1"; then
# Remove leading and trailing path separator characters from ARG. MSYS
# behavior is inconsistent here; cygpath turns them into '.;' and ';.';
# and winepath ignores them completely.
func_stripname : : "$1"
func_to_host_path_tmp1=$func_stripname_result
func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
func_to_host_path_result=$func_convert_core_msys_to_w32_result
func_convert_path_check : ";" \
"$func_to_host_path_tmp1" "$func_to_host_path_result"
func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
fi
}
# end func_convert_path_msys_to_w32
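# Illustrative example (editorial, not part of the upstream script), on an
# MSYS build system where the helpers above are available:
#   func_convert_path_msys_to_w32 "/c/foo:/c/bar"
#   # -> func_to_host_path_result=c:\foo;c:\bar   (approximately; the exact
#   #    drive mapping depends on the MSYS mount table)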
# func_convert_path_cygwin_to_w32 ARG
# Convert path ARG from Cygwin to w32 format. Returns result in
# func_to_host_path_result.
func_convert_path_cygwin_to_w32 ()
{
$debug_cmd
func_to_host_path_result=$1
if test -n "$1"; then
# See func_convert_path_msys_to_w32:
func_stripname : : "$1"
func_to_host_path_tmp1=$func_stripname_result
func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"`
func_convert_path_check : ";" \
"$func_to_host_path_tmp1" "$func_to_host_path_result"
func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
fi
}
# end func_convert_path_cygwin_to_w32
# func_convert_path_nix_to_w32 ARG
# Convert path ARG from *nix to w32 format. Requires a wine environment and
# a working winepath. Returns result in func_to_host_path_result.
func_convert_path_nix_to_w32 ()
{
$debug_cmd
func_to_host_path_result=$1
if test -n "$1"; then
# See func_convert_path_msys_to_w32:
func_stripname : : "$1"
func_to_host_path_tmp1=$func_stripname_result
func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
func_to_host_path_result=$func_convert_core_path_wine_to_w32_result
func_convert_path_check : ";" \
"$func_to_host_path_tmp1" "$func_to_host_path_result"
func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
fi
}
# end func_convert_path_nix_to_w32
# func_convert_path_msys_to_cygwin ARG
# Convert path ARG from MSYS to Cygwin format. Requires LT_CYGPATH set.
# Returns result in func_to_host_path_result.
func_convert_path_msys_to_cygwin ()
{
$debug_cmd
func_to_host_path_result=$1
if test -n "$1"; then
# See func_convert_path_msys_to_w32:
func_stripname : : "$1"
func_to_host_path_tmp1=$func_stripname_result
func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
func_cygpath -u -p "$func_convert_core_msys_to_w32_result"
func_to_host_path_result=$func_cygpath_result
func_convert_path_check : : \
"$func_to_host_path_tmp1" "$func_to_host_path_result"
func_convert_path_front_back_pathsep ":*" "*:" : "$1"
fi
}
# end func_convert_path_msys_to_cygwin
# func_convert_path_nix_to_cygwin ARG
# Convert path ARG from *nix to Cygwin format. Requires Cygwin installed in
# a wine environment, working winepath, and LT_CYGPATH set. Returns result in
# func_to_host_path_result.
func_convert_path_nix_to_cygwin ()
{
$debug_cmd
func_to_host_path_result=$1
if test -n "$1"; then
# Remove leading and trailing path separator characters from
# ARG. msys behavior is inconsistent here, cygpath turns them
# into '.;' and ';.', and winepath ignores them completely.
func_stripname : : "$1"
func_to_host_path_tmp1=$func_stripname_result
func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result"
func_to_host_path_result=$func_cygpath_result
func_convert_path_check : : \
"$func_to_host_path_tmp1" "$func_to_host_path_result"
func_convert_path_front_back_pathsep ":*" "*:" : "$1"
fi
}
# end func_convert_path_nix_to_cygwin
# func_dll_def_p FILE
# True iff FILE is a Windows DLL '.def' file.
# Keep in sync with _LT_DLL_DEF_P in libtool.m4
func_dll_def_p ()
{
$debug_cmd
func_dll_def_p_tmp=`$SED -n \
-e 's/^[ ]*//' \
-e '/^\(;.*\)*$/d' \
-e 's/^\(EXPORTS\|LIBRARY\)\([ ].*\)*$/DEF/p' \
-e q \
"$1"`
test DEF = "$func_dll_def_p_tmp"
}
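# Illustrative example (editorial, not part of the upstream script): a file
# whose first non-blank, non-comment line starts with LIBRARY or EXPORTS is
# treated as a module-definition file, e.g.
#   ; example.def
#   LIBRARY "example"
#   EXPORTS
#     some_symbol
# makes func_dll_def_p return true.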
# func_mode_compile arg...
func_mode_compile ()
{
$debug_cmd
# Get the compilation command and the source file.
base_compile=
srcfile=$nonopt # always keep a non-empty value in "srcfile"
suppress_opt=yes
suppress_output=
arg_mode=normal
libobj=
later=
pie_flag=
for arg
do
case $arg_mode in
arg )
# do not "continue". Instead, add this to base_compile
lastarg=$arg
arg_mode=normal
;;
target )
libobj=$arg
arg_mode=normal
continue
;;
normal )
# Accept any command-line options.
case $arg in
-o)
test -n "$libobj" && \
func_fatal_error "you cannot specify '-o' more than once"
arg_mode=target
continue
;;
-pie | -fpie | -fPIE)
func_append pie_flag " $arg"
continue
;;
-shared | -static | -prefer-pic | -prefer-non-pic)
func_append later " $arg"
continue
;;
-no-suppress)
suppress_opt=no
continue
;;
-Xcompiler)
arg_mode=arg # the next one goes into the "base_compile" arg list
continue # The current "srcfile" will either be retained or
;; # replaced later. I would guess that would be a bug.
-Wc,*)
func_stripname '-Wc,' '' "$arg"
args=$func_stripname_result
lastarg=
save_ifs=$IFS; IFS=,
for arg in $args; do
IFS=$save_ifs
func_append_quoted lastarg "$arg"
done
IFS=$save_ifs
func_stripname ' ' '' "$lastarg"
lastarg=$func_stripname_result
# Add the arguments to base_compile.
func_append base_compile " $lastarg"
continue
;;
*)
# Accept the current argument as the source file.
# The previous "srcfile" becomes the current argument.
#
lastarg=$srcfile
srcfile=$arg
;;
esac # case $arg
;;
esac # case $arg_mode
# Aesthetically quote the previous argument.
func_append_quoted base_compile "$lastarg"
done # for arg
case $arg_mode in
arg)
func_fatal_error "you must specify an argument for -Xcompile"
;;
target)
func_fatal_error "you must specify a target with '-o'"
;;
*)
# Get the name of the library object.
test -z "$libobj" && {
func_basename "$srcfile"
libobj=$func_basename_result
}
;;
esac
# Recognize several different file suffixes.
# If the user specifies -o file.o, it is replaced with file.lo
case $libobj in
*.[cCFSifmso] | \
*.ada | *.adb | *.ads | *.asm | \
*.c++ | *.cc | *.ii | *.class | *.cpp | *.cxx | \
*.[fF][09]? | *.for | *.java | *.go | *.obj | *.sx | *.cu | *.cup)
func_xform "$libobj"
libobj=$func_xform_result
;;
esac
case $libobj in
*.lo) func_lo2o "$libobj"; obj=$func_lo2o_result ;;
*)
func_fatal_error "cannot determine name of library object from '$libobj'"
;;
esac
func_infer_tag $base_compile
for arg in $later; do
case $arg in
-shared)
test yes = "$build_libtool_libs" \
|| func_fatal_configuration "cannot build a shared library"
build_old_libs=no
continue
;;
-static)
build_libtool_libs=no
build_old_libs=yes
continue
;;
-prefer-pic)
pic_mode=yes
continue
;;
-prefer-non-pic)
pic_mode=no
continue
;;
esac
done
func_quote_for_eval "$libobj"
test "X$libobj" != "X$func_quote_for_eval_result" \
&& $ECHO "X$libobj" | $GREP '[]~#^*{};<>?"'"'"' &()|`$[]' \
&& func_warning "libobj name '$libobj' may not contain shell special characters."
func_dirname_and_basename "$obj" "/" ""
objname=$func_basename_result
xdir=$func_dirname_result
lobj=$xdir$objdir/$objname
test -z "$base_compile" && \
func_fatal_help "you must specify a compilation command"
# Delete any leftover library objects.
if test yes = "$build_old_libs"; then
removelist="$obj $lobj $libobj ${libobj}T"
else
removelist="$lobj $libobj ${libobj}T"
fi
# On Cygwin there's no "real" PIC flag so we must build both object types
case $host_os in
cygwin* | mingw* | pw32* | os2* | cegcc*)
pic_mode=default
;;
esac
if test no = "$pic_mode" && test pass_all != "$deplibs_check_method"; then
# non-PIC code in shared libraries is not supported
pic_mode=default
fi
# Calculate the filename of the output object if compiler does
# not support -o with -c
if test no = "$compiler_c_o"; then
output_obj=`$ECHO "$srcfile" | $SED 's%^.*/%%; s%\.[^.]*$%%'`.$objext
lockfile=$output_obj.lock
else
output_obj=
need_locks=no
lockfile=
fi
# Lock this critical section if it is needed
# We use this script file to make the link; it avoids creating a new file
if test yes = "$need_locks"; then
until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do
func_echo "Waiting for $lockfile to be removed"
sleep 2
done
elif test warn = "$need_locks"; then
if test -f "$lockfile"; then
$ECHO "\
*** ERROR, $lockfile exists and contains:
`cat $lockfile 2>/dev/null`
This indicates that another process is trying to use the same
temporary object file, and libtool could not work around it because
your compiler does not support '-c' and '-o' together. If you
repeat this compilation, it may succeed, by chance, but you had better
avoid parallel builds (make -j) on this platform, or get a better
compiler."
$opt_dry_run || $RM $removelist
exit $EXIT_FAILURE
fi
func_append removelist " $output_obj"
$ECHO "$srcfile" > "$lockfile"
fi
$opt_dry_run || $RM $removelist
func_append removelist " $lockfile"
trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15
func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
srcfile=$func_to_tool_file_result
func_quote_for_eval "$srcfile"
qsrcfile=$func_quote_for_eval_result
# Only build a PIC object if we are building libtool libraries.
if test yes = "$build_libtool_libs"; then
# Without this assignment, base_compile gets emptied.
fbsd_hideous_sh_bug=$base_compile
if test no != "$pic_mode"; then
command="$base_compile $qsrcfile $pic_flag"
else
# Don't build PIC code
command="$base_compile $qsrcfile"
fi
func_mkdir_p "$xdir$objdir"
if test -z "$output_obj"; then
# Place PIC objects in $objdir
func_append command " -o $lobj"
fi
func_show_eval_locale "$command" \
'test -n "$output_obj" && $RM $removelist; exit $EXIT_FAILURE'
if test warn = "$need_locks" &&
test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then
$ECHO "\
*** ERROR, $lockfile contains:
`cat $lockfile 2>/dev/null`
but it should contain:
$srcfile
This indicates that another process is trying to use the same
temporary object file, and libtool could not work around it because
your compiler does not support '-c' and '-o' together. If you
repeat this compilation, it may succeed, by chance, but you had better
avoid parallel builds (make -j) on this platform, or get a better
compiler."
$opt_dry_run || $RM $removelist
exit $EXIT_FAILURE
fi
# Just move the object if needed, then go on to compile the next one
if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then
func_show_eval '$MV "$output_obj" "$lobj"' \
'error=$?; $opt_dry_run || $RM $removelist; exit $error'
fi
# Allow error messages only from the first compilation.
if test yes = "$suppress_opt"; then
suppress_output=' >/dev/null 2>&1'
fi
fi
# Only build a position-dependent object if we build old libraries.
if test yes = "$build_old_libs"; then
if test yes != "$pic_mode"; then
# Don't build PIC code
command="$base_compile $qsrcfile$pie_flag"
else
command="$base_compile $qsrcfile $pic_flag"
fi
if test yes = "$compiler_c_o"; then
func_append command " -o $obj"
fi
# Suppress compiler output if we already did a PIC compilation.
func_append command "$suppress_output"
func_show_eval_locale "$command" \
'$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE'
if test warn = "$need_locks" &&
test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then
$ECHO "\
*** ERROR, $lockfile contains:
`cat $lockfile 2>/dev/null`
but it should contain:
$srcfile
This indicates that another process is trying to use the same
temporary object file, and libtool could not work around it because
your compiler does not support '-c' and '-o' together. If you
repeat this compilation, it may succeed, by chance, but you had better
avoid parallel builds (make -j) on this platform, or get a better
compiler."
$opt_dry_run || $RM $removelist
exit $EXIT_FAILURE
fi
# Just move the object if needed
if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then
func_show_eval '$MV "$output_obj" "$obj"' \
'error=$?; $opt_dry_run || $RM $removelist; exit $error'
fi
fi
$opt_dry_run || {
func_write_libtool_object "$libobj" "$objdir/$objname" "$objname"
# Unlock the critical section if it was locked
if test no != "$need_locks"; then
removelist=$lockfile
$RM "$lockfile"
fi
}
exit $EXIT_SUCCESS
}
$opt_help || {
test compile = "$opt_mode" && func_mode_compile ${1+"$@"}
}
func_mode_help ()
{
# We need to display help for each of the modes.
case $opt_mode in
"")
# Generic help is extracted from the usage comments
# at the start of this file.
func_help
;;
clean)
$ECHO \
"Usage: $progname [OPTION]... --mode=clean RM [RM-OPTION]... FILE...
Remove files from the build directory.
RM is the name of the program to use to delete files associated with each FILE
(typically '/bin/rm'). RM-OPTIONS are options (such as '-f') to be passed
to RM.
If FILE is a libtool library, object or program, all the files associated
with it are deleted. Otherwise, only FILE itself is deleted using RM."
;;
compile)
$ECHO \
"Usage: $progname [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE
Compile a source file into a libtool library object.
This mode accepts the following additional options:
-o OUTPUT-FILE set the output file name to OUTPUT-FILE
-no-suppress do not suppress compiler output for multiple passes
-prefer-pic try to build PIC objects only
-prefer-non-pic try to build non-PIC objects only
-shared do not build a '.o' file suitable for static linking
-static only build a '.o' file suitable for static linking
-Wc,FLAG pass FLAG directly to the compiler
COMPILE-COMMAND is a command to be used in creating a 'standard' object file
from the given SOURCEFILE.
The output file name is determined by removing the directory component from
SOURCEFILE, then substituting the C source code suffix '.c' with the
library object suffix, '.lo'."
;;
execute)
$ECHO \
"Usage: $progname [OPTION]... --mode=execute COMMAND [ARGS]...
Automatically set library path, then run a program.
This mode accepts the following additional options:
-dlopen FILE add the directory containing FILE to the library path
This mode sets the library path environment variable according to '-dlopen'
flags.
If any of the ARGS are libtool executable wrappers, then they are translated
into their corresponding uninstalled binary, and any of their required library
directories are added to the library path.
Then, COMMAND is executed, with ARGS as arguments."
;;
finish)
$ECHO \
"Usage: $progname [OPTION]... --mode=finish [LIBDIR]...
Complete the installation of libtool libraries.
Each LIBDIR is a directory that contains libtool libraries.
The commands that this mode executes may require superuser privileges. Use
the '--dry-run' option if you just want to see what would be executed."
;;
install)
$ECHO \
"Usage: $progname [OPTION]... --mode=install INSTALL-COMMAND...
Install executables or libraries.
INSTALL-COMMAND is the installation command. The first component should be
either the 'install' or 'cp' program.
The following components of INSTALL-COMMAND are treated specially:
-inst-prefix-dir PREFIX-DIR Use PREFIX-DIR as a staging area for installation
The rest of the components are interpreted as arguments to that command (only
BSD-compatible install options are recognized)."
;;
link)
$ECHO \
"Usage: $progname [OPTION]... --mode=link LINK-COMMAND...
Link object files or libraries together to form another library, or to
create an executable program.
LINK-COMMAND is a command using the C compiler that you would use to create
a program from several object files.
The following components of LINK-COMMAND are treated specially:
-all-static do not do any dynamic linking at all
-avoid-version do not add a version suffix if possible
-bindir BINDIR specify path to binaries directory (for systems where
libraries must be found in the PATH setting at runtime)
-dlopen FILE '-dlpreopen' FILE if it cannot be dlopened at runtime
-dlpreopen FILE link in FILE and add its symbols to lt_preloaded_symbols
-export-dynamic allow symbols from OUTPUT-FILE to be resolved with dlsym(3)
-export-symbols SYMFILE
try to export only the symbols listed in SYMFILE
-export-symbols-regex REGEX
try to export only the symbols matching REGEX
-LLIBDIR search LIBDIR for required installed libraries
-lNAME OUTPUT-FILE requires the installed library libNAME
-module build a library that can be dlopened
-no-fast-install disable the fast-install mode
-no-install link a not-installable executable
-no-undefined declare that a library does not refer to external symbols
-o OUTPUT-FILE create OUTPUT-FILE from the specified objects
-objectlist FILE use a list of object files found in FILE to specify objects
-os2dllname NAME force a short DLL name on OS/2 (no effect on other OSes)
-precious-files-regex REGEX
don't remove output files matching REGEX
-release RELEASE specify package release information
-rpath LIBDIR the created library will eventually be installed in LIBDIR
-R[ ]LIBDIR add LIBDIR to the runtime path of programs and libraries
-shared only do dynamic linking of libtool libraries
-shrext SUFFIX override the standard shared library file extension
-static do not do any dynamic linking of uninstalled libtool libraries
-static-libtool-libs
do not do any dynamic linking of libtool libraries
-version-info CURRENT[:REVISION[:AGE]]
specify library version info [each variable defaults to 0]
-weak LIBNAME declare that the target provides the LIBNAME interface
-Wc,FLAG
-Xcompiler FLAG pass linker-specific FLAG directly to the compiler
-Wl,FLAG
-Xlinker FLAG pass linker-specific FLAG directly to the linker
-XCClinker FLAG pass link-specific FLAG to the compiler driver (CC)
All other options (arguments beginning with '-') are ignored.
Every other argument is treated as a filename. Files ending in '.la' are
treated as uninstalled libtool libraries, other files are standard or library
object files.
If the OUTPUT-FILE ends in '.la', then a libtool library is created,
only library objects ('.lo' files) may be specified, and '-rpath' is
required, except when creating a convenience library.
If OUTPUT-FILE ends in '.a' or '.lib', then a standard library is created
using 'ar' and 'ranlib', or on Windows using 'lib'.
If OUTPUT-FILE ends in '.lo' or '.$objext', then a reloadable object file
is created, otherwise an executable program is created."
;;
uninstall)
$ECHO \
"Usage: $progname [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE...
Remove libraries from an installation directory.
RM is the name of the program to use to delete files associated with each FILE
(typically '/bin/rm'). RM-OPTIONS are options (such as '-f') to be passed
to RM.
If FILE is a libtool library, all the files associated with it are deleted.
Otherwise, only FILE itself is deleted using RM."
;;
*)
func_fatal_help "invalid operation mode '$opt_mode'"
;;
esac
echo
$ECHO "Try '$progname --help' for more information about other modes."
}
# Now that we've collected a possible --mode arg, show help if necessary
if $opt_help; then
if test : = "$opt_help"; then
func_mode_help
else
{
func_help noexit
for opt_mode in compile link execute install finish uninstall clean; do
func_mode_help
done
} | $SED -n '1p; 2,$s/^Usage:/ or: /p'
{
func_help noexit
for opt_mode in compile link execute install finish uninstall clean; do
echo
func_mode_help
done
} |
$SED '1d
/^When reporting/,/^Report/{
H
d
}
$x
/information about other modes/d
/more detailed .*MODE/d
s/^Usage:.*--mode=\([^ ]*\) .*/Description of \1 mode:/'
fi
exit $?
fi
# func_mode_execute arg...
func_mode_execute ()
{
$debug_cmd
# The first argument is the command name.
cmd=$nonopt
test -z "$cmd" && \
func_fatal_help "you must specify a COMMAND"
# Handle -dlopen flags immediately.
for file in $opt_dlopen; do
test -f "$file" \
|| func_fatal_help "'$file' is not a file"
dir=
case $file in
*.la)
func_resolve_sysroot "$file"
file=$func_resolve_sysroot_result
# Check to see that this really is a libtool archive.
func_lalib_unsafe_p "$file" \
|| func_fatal_help "'$lib' is not a valid libtool archive"
# Read the libtool library.
dlname=
library_names=
func_source "$file"
# Skip this library if it cannot be dlopened.
if test -z "$dlname"; then
# Warn if it was a shared library.
test -n "$library_names" && \
func_warning "'$file' was not linked with '-export-dynamic'"
continue
fi
func_dirname "$file" "" "."
dir=$func_dirname_result
if test -f "$dir/$objdir/$dlname"; then
func_append dir "/$objdir"
else
if test ! -f "$dir/$dlname"; then
func_fatal_error "cannot find '$dlname' in '$dir' or '$dir/$objdir'"
fi
fi
;;
*.lo)
# Just add the directory containing the .lo file.
func_dirname "$file" "" "."
dir=$func_dirname_result
;;
*)
func_warning "'-dlopen' is ignored for non-libtool libraries and objects"
continue
;;
esac
# Get the absolute pathname.
absdir=`cd "$dir" && pwd`
test -n "$absdir" && dir=$absdir
# Now add the directory to shlibpath_var.
if eval "test -z \"\$$shlibpath_var\""; then
eval "$shlibpath_var=\"\$dir\""
else
eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\""
fi
done
# This variable tells wrapper scripts just to set shlibpath_var
# rather than running their programs.
libtool_execute_magic=$magic
# Check if any of the arguments is a wrapper script.
args=
for file
do
case $file in
-* | *.la | *.lo ) ;;
*)
# Do a test to see if this is really a libtool program.
if func_ltwrapper_script_p "$file"; then
func_source "$file"
# Transform arg to wrapped name.
file=$progdir/$program
elif func_ltwrapper_executable_p "$file"; then
func_ltwrapper_scriptname "$file"
func_source "$func_ltwrapper_scriptname_result"
# Transform arg to wrapped name.
file=$progdir/$program
fi
;;
esac
# Quote arguments (to preserve shell metacharacters).
func_append_quoted args "$file"
done
if $opt_dry_run; then
# Display what would be done.
if test -n "$shlibpath_var"; then
eval "\$ECHO \"\$shlibpath_var=\$$shlibpath_var\""
echo "export $shlibpath_var"
fi
$ECHO "$cmd$args"
exit $EXIT_SUCCESS
else
if test -n "$shlibpath_var"; then
# Export the shlibpath_var.
eval "export $shlibpath_var"
fi
# Restore saved environment variables
for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES
do
eval "if test \"\${save_$lt_var+set}\" = set; then
$lt_var=\$save_$lt_var; export $lt_var
else
$lt_unset $lt_var
fi"
done
# Now prepare to actually exec the command.
exec_cmd=\$cmd$args
fi
}
test execute = "$opt_mode" && func_mode_execute ${1+"$@"}
# func_mode_finish arg...
func_mode_finish ()
{
$debug_cmd
libs=
libdirs=
admincmds=
for opt in "$nonopt" ${1+"$@"}
do
if test -d "$opt"; then
func_append libdirs " $opt"
elif test -f "$opt"; then
if func_lalib_unsafe_p "$opt"; then
func_append libs " $opt"
else
func_warning "'$opt' is not a valid libtool archive"
fi
else
func_fatal_error "invalid argument '$opt'"
fi
done
if test -n "$libs"; then
if test -n "$lt_sysroot"; then
sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"`
sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;"
else
sysroot_cmd=
fi
# Remove sysroot references
if $opt_dry_run; then
for lib in $libs; do
echo "removing references to $lt_sysroot and '=' prefixes from $lib"
done
else
tmpdir=`func_mktempdir`
for lib in $libs; do
$SED -e "$sysroot_cmd s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \
> $tmpdir/tmp-la
mv -f $tmpdir/tmp-la $lib
done
${RM}r "$tmpdir"
fi
fi
if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
for libdir in $libdirs; do
if test -n "$finish_cmds"; then
# Do each command in the finish commands.
func_execute_cmds "$finish_cmds" 'admincmds="$admincmds
'"$cmd"'"'
fi
if test -n "$finish_eval"; then
# Do the single finish_eval.
eval cmds=\"$finish_eval\"
$opt_dry_run || eval "$cmds" || func_append admincmds "
$cmds"
fi
done
fi
# Exit here if they wanted silent mode.
$opt_quiet && exit $EXIT_SUCCESS
if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
echo "----------------------------------------------------------------------"
echo "Libraries have been installed in:"
for libdir in $libdirs; do
$ECHO " $libdir"
done
echo
echo "If you ever happen to want to link against installed libraries"
echo "in a given directory, LIBDIR, you must either use libtool, and"
echo "specify the full pathname of the library, or use the '-LLIBDIR'"
echo "flag during linking and do at least one of the following:"
if test -n "$shlibpath_var"; then
echo " - add LIBDIR to the '$shlibpath_var' environment variable"
echo " during execution"
fi
if test -n "$runpath_var"; then
echo " - add LIBDIR to the '$runpath_var' environment variable"
echo " during linking"
fi
if test -n "$hardcode_libdir_flag_spec"; then
libdir=LIBDIR
eval flag=\"$hardcode_libdir_flag_spec\"
$ECHO " - use the '$flag' linker flag"
fi
if test -n "$admincmds"; then
$ECHO " - have your system administrator run these commands:$admincmds"
fi
if test -f /etc/ld.so.conf; then
echo " - have your system administrator add LIBDIR to '/etc/ld.so.conf'"
fi
echo
echo "See any operating system documentation about shared libraries for"
case $host in
solaris2.[6789]|solaris2.1[0-9])
echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
echo "pages."
;;
*)
echo "more information, such as the ld(1) and ld.so(8) manual pages."
;;
esac
echo "----------------------------------------------------------------------"
fi
exit $EXIT_SUCCESS
}
test finish = "$opt_mode" && func_mode_finish ${1+"$@"}
# func_mode_install arg...
func_mode_install ()
{
$debug_cmd
# There may be an optional sh(1) argument at the beginning of
# install_prog (especially on Windows NT).
if test "$SHELL" = "$nonopt" || test /bin/sh = "$nonopt" ||
# Allow the use of GNU shtool's install command.
case $nonopt in *shtool*) :;; *) false;; esac
then
# Aesthetically quote it.
func_quote_for_eval "$nonopt"
install_prog="$func_quote_for_eval_result "
arg=$1
shift
else
install_prog=
arg=$nonopt
fi
# The real first argument should be the name of the installation program.
# Aesthetically quote it.
func_quote_for_eval "$arg"
func_append install_prog "$func_quote_for_eval_result"
install_shared_prog=$install_prog
case " $install_prog " in
*[\\\ /]cp\ *) install_cp=: ;;
*) install_cp=false ;;
esac
# We need to accept at least all the BSD install flags.
dest=
files=
opts=
prev=
install_type=
isdir=false
stripme=
no_mode=:
for arg
do
arg2=
if test -n "$dest"; then
func_append files " $dest"
dest=$arg
continue
fi
case $arg in
-d) isdir=: ;;
-f)
if $install_cp; then :; else
prev=$arg
fi
;;
-g | -m | -o)
prev=$arg
;;
-s)
stripme=" -s"
continue
;;
-*)
;;
*)
# If the previous option needed an argument, then skip it.
if test -n "$prev"; then
if test X-m = "X$prev" && test -n "$install_override_mode"; then
arg2=$install_override_mode
no_mode=false
fi
prev=
else
dest=$arg
continue
fi
;;
esac
# Aesthetically quote the argument.
func_quote_for_eval "$arg"
func_append install_prog " $func_quote_for_eval_result"
if test -n "$arg2"; then
func_quote_for_eval "$arg2"
fi
func_append install_shared_prog " $func_quote_for_eval_result"
done
test -z "$install_prog" && \
func_fatal_help "you must specify an install program"
test -n "$prev" && \
func_fatal_help "the '$prev' option requires an argument"
if test -n "$install_override_mode" && $no_mode; then
if $install_cp; then :; else
func_quote_for_eval "$install_override_mode"
func_append install_shared_prog " -m $func_quote_for_eval_result"
fi
fi
if test -z "$files"; then
if test -z "$dest"; then
func_fatal_help "no file or destination specified"
else
func_fatal_help "you must specify a destination"
fi
fi
# Strip any trailing slash from the destination.
func_stripname '' '/' "$dest"
dest=$func_stripname_result
# Check to see that the destination is a directory.
test -d "$dest" && isdir=:
if $isdir; then
destdir=$dest
destname=
else
func_dirname_and_basename "$dest" "" "."
destdir=$func_dirname_result
destname=$func_basename_result
# Not a directory, so check to see that there is only one file specified.
set dummy $files; shift
test "$#" -gt 1 && \
func_fatal_help "'$dest' is not a directory"
fi
case $destdir in
[\\/]* | [A-Za-z]:[\\/]*) ;;
*)
for file in $files; do
case $file in
*.lo) ;;
*)
func_fatal_help "'$destdir' must be an absolute directory name"
;;
esac
done
;;
esac
# This variable tells wrapper scripts just to set variables rather
# than running their programs.
libtool_install_magic=$magic
staticlibs=
future_libdirs=
current_libdirs=
for file in $files; do
# Do each installation.
case $file in
*.$libext)
# Do the static libraries later.
func_append staticlibs " $file"
;;
*.la)
func_resolve_sysroot "$file"
file=$func_resolve_sysroot_result
# Check to see that this really is a libtool archive.
func_lalib_unsafe_p "$file" \
|| func_fatal_help "'$file' is not a valid libtool archive"
library_names=
old_library=
relink_command=
func_source "$file"
# Add the libdir to current_libdirs if it is the destination.
if test "X$destdir" = "X$libdir"; then
case "$current_libdirs " in
*" $libdir "*) ;;
*) func_append current_libdirs " $libdir" ;;
esac
else
# Note the libdir as a future libdir.
case "$future_libdirs " in
*" $libdir "*) ;;
*) func_append future_libdirs " $libdir" ;;
esac
fi
func_dirname "$file" "/" ""
dir=$func_dirname_result
func_append dir "$objdir"
if test -n "$relink_command"; then
# Determine the prefix the user has applied to our future dir.
inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"`
# Don't allow the user to place us outside of our expected
# location because this prevents finding dependent libraries that
# are installed to the same prefix.
# At present, this check doesn't affect windows .dll's that
# are installed into $libdir/../bin (currently, that works fine)
# but it's something to keep an eye on.
test "$inst_prefix_dir" = "$destdir" && \
func_fatal_error "error: cannot install '$file' to a directory not ending in $libdir"
if test -n "$inst_prefix_dir"; then
# Stick the inst_prefix_dir data into the link command.
relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"`
else
relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%%"`
fi
func_warning "relinking '$file'"
func_show_eval "$relink_command" \
'func_fatal_error "error: relink '\''$file'\'' with the above command before installing it"'
fi
# See the names of the shared library.
set dummy $library_names; shift
if test -n "$1"; then
realname=$1
shift
srcname=$realname
test -n "$relink_command" && srcname=${realname}T
# Install the shared library and build the symlinks.
func_show_eval "$install_shared_prog $dir/$srcname $destdir/$realname" \
'exit $?'
tstripme=$stripme
case $host_os in
cygwin* | mingw* | pw32* | cegcc*)
case $realname in
*.dll.a)
tstripme=
;;
esac
;;
os2*)
case $realname in
*_dll.a)
tstripme=
;;
esac
;;
esac
if test -n "$tstripme" && test -n "$striplib"; then
func_show_eval "$striplib $destdir/$realname" 'exit $?'
fi
if test "$#" -gt 0; then
# Delete the old symlinks, and create new ones.
# Try 'ln -sf' first, because the 'ln' binary might depend on
# the symlink we replace! Solaris /bin/ln does not understand -f,
# so we also need to try rm && ln -s.
for linkname
do
test "$linkname" != "$realname" \
&& func_show_eval "(cd $destdir && { $LN_S -f $realname $linkname || { $RM $linkname && $LN_S $realname $linkname; }; })"
done
fi
# Do each command in the postinstall commands.
lib=$destdir/$realname
func_execute_cmds "$postinstall_cmds" 'exit $?'
fi
# Install the pseudo-library for information purposes.
func_basename "$file"
name=$func_basename_result
instname=$dir/${name}i
func_show_eval "$install_prog $instname $destdir/$name" 'exit $?'
# Maybe install the static library, too.
test -n "$old_library" && func_append staticlibs " $dir/$old_library"
;;
*.lo)
# Install (i.e. copy) a libtool object.
# Figure out destination file name, if it wasn't already specified.
if test -n "$destname"; then
destfile=$destdir/$destname
else
func_basename "$file"
destfile=$func_basename_result
destfile=$destdir/$destfile
fi
# Deduce the name of the destination old-style object file.
case $destfile in
*.lo)
func_lo2o "$destfile"
staticdest=$func_lo2o_result
;;
*.$objext)
staticdest=$destfile
destfile=
;;
*)
func_fatal_help "cannot copy a libtool object to '$destfile'"
;;
esac
# Install the libtool object if requested.
test -n "$destfile" && \
func_show_eval "$install_prog $file $destfile" 'exit $?'
# Install the old object if enabled.
if test yes = "$build_old_libs"; then
# Deduce the name of the old-style object file.
func_lo2o "$file"
staticobj=$func_lo2o_result
func_show_eval "$install_prog \$staticobj \$staticdest" 'exit $?'
fi
exit $EXIT_SUCCESS
;;
*)
# Figure out destination file name, if it wasn't already specified.
if test -n "$destname"; then
destfile=$destdir/$destname
else
func_basename "$file"
destfile=$func_basename_result
destfile=$destdir/$destfile
fi
# If the file is missing, and there is a .exe on the end, strip it
# because it is most likely a libtool script we actually want to
# install
stripped_ext=
case $file in
*.exe)
if test ! -f "$file"; then
func_stripname '' '.exe' "$file"
file=$func_stripname_result
stripped_ext=.exe
fi
;;
esac
# Do a test to see if this is really a libtool program.
case $host in
*cygwin* | *mingw*)
if func_ltwrapper_executable_p "$file"; then
func_ltwrapper_scriptname "$file"
wrapper=$func_ltwrapper_scriptname_result
else
func_stripname '' '.exe' "$file"
wrapper=$func_stripname_result
fi
;;
*)
wrapper=$file
;;
esac
if func_ltwrapper_script_p "$wrapper"; then
notinst_deplibs=
relink_command=
func_source "$wrapper"
# Check the variables that should have been set.
test -z "$generated_by_libtool_version" && \
func_fatal_error "invalid libtool wrapper script '$wrapper'"
finalize=:
for lib in $notinst_deplibs; do
# Check to see that each library is installed.
libdir=
if test -f "$lib"; then
func_source "$lib"
fi
libfile=$libdir/`$ECHO "$lib" | $SED 's%^.*/%%g'`
if test -n "$libdir" && test ! -f "$libfile"; then
func_warning "'$lib' has not been installed in '$libdir'"
finalize=false
fi
done
relink_command=
func_source "$wrapper"
outputname=
if test no = "$fast_install" && test -n "$relink_command"; then
$opt_dry_run || {
if $finalize; then
tmpdir=`func_mktempdir`
func_basename "$file$stripped_ext"
file=$func_basename_result
outputname=$tmpdir/$file
# Replace the output file specification.
relink_command=`$ECHO "$relink_command" | $SED 's%@OUTPUT@%'"$outputname"'%g'`
$opt_quiet || {
func_quote_for_expand "$relink_command"
eval "func_echo $func_quote_for_expand_result"
}
if eval "$relink_command"; then :
else
func_error "error: relink '$file' with the above command before installing it"
$opt_dry_run || ${RM}r "$tmpdir"
continue
fi
file=$outputname
else
func_warning "cannot relink '$file'"
fi
}
else
# Install the binary that we compiled earlier.
file=`$ECHO "$file$stripped_ext" | $SED "s%\([^/]*\)$%$objdir/\1%"`
fi
fi
# remove .exe since cygwin /usr/bin/install will append another
# one anyway
case $install_prog,$host in
*/usr/bin/install*,*cygwin*)
case $file:$destfile in
*.exe:*.exe)
# this is ok
;;
*.exe:*)
destfile=$destfile.exe
;;
*:*.exe)
func_stripname '' '.exe' "$destfile"
destfile=$func_stripname_result
;;
esac
;;
esac
func_show_eval "$install_prog\$stripme \$file \$destfile" 'exit $?'
$opt_dry_run || if test -n "$outputname"; then
${RM}r "$tmpdir"
fi
;;
esac
done
for file in $staticlibs; do
func_basename "$file"
name=$func_basename_result
# Set up the ranlib parameters.
oldlib=$destdir/$name
func_to_tool_file "$oldlib" func_convert_file_msys_to_w32
tool_oldlib=$func_to_tool_file_result
func_show_eval "$install_prog \$file \$oldlib" 'exit $?'
if test -n "$stripme" && test -n "$old_striplib"; then
func_show_eval "$old_striplib $tool_oldlib" 'exit $?'
fi
# Do each command in the postinstall commands.
func_execute_cmds "$old_postinstall_cmds" 'exit $?'
done
test -n "$future_libdirs" && \
func_warning "remember to run '$progname --finish$future_libdirs'"
if test -n "$current_libdirs"; then
# Maybe just do a dry run.
$opt_dry_run && current_libdirs=" -n$current_libdirs"
exec_cmd='$SHELL "$progpath" $preserve_args --finish$current_libdirs'
else
exit $EXIT_SUCCESS
fi
}
test install = "$opt_mode" && func_mode_install ${1+"$@"}
# func_generate_dlsyms outputname originator pic_p
# Extract symbols from dlprefiles and create ${outputname}S.o with
# a dlpreopen symbol table.
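# Illustrative sketch (hypothetical names): for a program "prog" linked with
# -dlpreopen libfoo.la, a call like
#   func_generate_dlsyms "prog" "prog" false
# would create $output_objdir/progS.c holding the
# lt_prog_LTX_preloaded_symbols[] table and compile it to progS.o.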
func_generate_dlsyms ()
{
$debug_cmd
my_outputname=$1
my_originator=$2
my_pic_p=${3-false}
my_prefix=`$ECHO "$my_originator" | $SED 's%[^a-zA-Z0-9]%_%g'`
my_dlsyms=
if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then
if test -n "$NM" && test -n "$global_symbol_pipe"; then
my_dlsyms=${my_outputname}S.c
else
func_error "not configured to extract global symbols from dlpreopened files"
fi
fi
if test -n "$my_dlsyms"; then
case $my_dlsyms in
"") ;;
*.c)
# Discover the nlist of each of the dlfiles.
nlist=$output_objdir/$my_outputname.nm
func_show_eval "$RM $nlist ${nlist}S ${nlist}T"
# Parse the name list into a source file.
func_verbose "creating $output_objdir/$my_dlsyms"
$opt_dry_run || $ECHO > "$output_objdir/$my_dlsyms" "\
/* $my_dlsyms - symbol resolution table for '$my_outputname' dlsym emulation. */
/* Generated by $PROGRAM (GNU $PACKAGE) $VERSION */
#ifdef __cplusplus
extern \"C\" {
#endif
#if defined __GNUC__ && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 4)) || (__GNUC__ > 4))
#pragma GCC diagnostic ignored \"-Wstrict-prototypes\"
#endif
/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */
#if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE
/* DATA imports from DLLs on WIN32 can't be const, because runtime
relocations are performed -- see ld's documentation on pseudo-relocs. */
# define LT_DLSYM_CONST
#elif defined __osf__
/* This system does not cope well with relocations in const data. */
# define LT_DLSYM_CONST
#else
# define LT_DLSYM_CONST const
#endif
#define STREQ(s1, s2) (strcmp ((s1), (s2)) == 0)
/* External symbol declarations for the compiler. */\
"
if test yes = "$dlself"; then
func_verbose "generating symbol list for '$output'"
$opt_dry_run || echo ': @PROGRAM@ ' > "$nlist"
# Add our own program objects to the symbol list.
progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP`
for progfile in $progfiles; do
func_to_tool_file "$progfile" func_convert_file_msys_to_w32
func_verbose "extracting global C symbols from '$func_to_tool_file_result'"
$opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'"
done
if test -n "$exclude_expsyms"; then
$opt_dry_run || {
eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T'
eval '$MV "$nlist"T "$nlist"'
}
fi
if test -n "$export_symbols_regex"; then
$opt_dry_run || {
eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T'
eval '$MV "$nlist"T "$nlist"'
}
fi
# Prepare the list of exported symbols
if test -z "$export_symbols"; then
export_symbols=$output_objdir/$outputname.exp
$opt_dry_run || {
$RM $export_symbols
eval "$SED -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"'
case $host in
*cygwin* | *mingw* | *cegcc* )
eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"'
;;
esac
}
else
$opt_dry_run || {
eval "$SED -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"'
eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T'
eval '$MV "$nlist"T "$nlist"'
case $host in
*cygwin* | *mingw* | *cegcc* )
eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
eval 'cat "$nlist" >> "$output_objdir/$outputname.def"'
;;
esac
}
fi
fi
for dlprefile in $dlprefiles; do
func_verbose "extracting global C symbols from '$dlprefile'"
func_basename "$dlprefile"
name=$func_basename_result
case $host in
*cygwin* | *mingw* | *cegcc* )
# if an import library, we need to obtain dlname
if func_win32_import_lib_p "$dlprefile"; then
func_tr_sh "$dlprefile"
eval "curr_lafile=\$libfile_$func_tr_sh_result"
dlprefile_dlbasename=
if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then
# Use subshell, to avoid clobbering current variable values
dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"`
if test -n "$dlprefile_dlname"; then
func_basename "$dlprefile_dlname"
dlprefile_dlbasename=$func_basename_result
else
# no lafile. user explicitly requested -dlpreopen <import library>.
$sharedlib_from_linklib_cmd "$dlprefile"
dlprefile_dlbasename=$sharedlib_from_linklib_result
fi
fi
$opt_dry_run || {
if test -n "$dlprefile_dlbasename"; then
eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"'
else
func_warning "Could not compute DLL name from $name"
eval '$ECHO ": $name " >> "$nlist"'
fi
func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe |
$SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'"
}
else # not an import lib
$opt_dry_run || {
eval '$ECHO ": $name " >> "$nlist"'
func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
}
fi
;;
*)
$opt_dry_run || {
eval '$ECHO ": $name " >> "$nlist"'
func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
}
;;
esac
done
$opt_dry_run || {
# Make sure we have at least an empty file.
test -f "$nlist" || : > "$nlist"
if test -n "$exclude_expsyms"; then
$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T
$MV "$nlist"T "$nlist"
fi
# Try sorting and uniquifying the output.
if $GREP -v "^: " < "$nlist" |
if sort -k 3 </dev/null >/dev/null 2>&1; then
sort -k 3
else
sort +2
fi |
uniq > "$nlist"S; then
:
else
$GREP -v "^: " < "$nlist" > "$nlist"S
fi
if test -f "$nlist"S; then
eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$my_dlsyms"'
else
echo '/* NONE */' >> "$output_objdir/$my_dlsyms"
fi
func_show_eval '$RM "${nlist}I"'
if test -n "$global_symbol_to_import"; then
eval "$global_symbol_to_import"' < "$nlist"S > "$nlist"I'
fi
echo >> "$output_objdir/$my_dlsyms" "\
/* The mapping between symbol names and symbols. */
typedef struct {
const char *name;
void *address;
} lt_dlsymlist;
extern LT_DLSYM_CONST lt_dlsymlist
lt_${my_prefix}_LTX_preloaded_symbols[];\
"
if test -s "$nlist"I; then
echo >> "$output_objdir/$my_dlsyms" "\
static void lt_syminit(void)
{
LT_DLSYM_CONST lt_dlsymlist *symbol = lt_${my_prefix}_LTX_preloaded_symbols;
for (; symbol->name; ++symbol)
{"
$SED 's/.*/ if (STREQ (symbol->name, \"&\")) symbol->address = (void *) \&&;/' < "$nlist"I >> "$output_objdir/$my_dlsyms"
echo >> "$output_objdir/$my_dlsyms" "\
}
}"
fi
echo >> "$output_objdir/$my_dlsyms" "\
LT_DLSYM_CONST lt_dlsymlist
lt_${my_prefix}_LTX_preloaded_symbols[] =
{ {\"$my_originator\", (void *) 0},"
if test -s "$nlist"I; then
echo >> "$output_objdir/$my_dlsyms" "\
{\"@INIT@\", (void *) &lt_syminit},"
fi
case $need_lib_prefix in
no)
eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$my_dlsyms"
;;
*)
eval "$global_symbol_to_c_name_address_lib_prefix" < "$nlist" >> "$output_objdir/$my_dlsyms"
;;
esac
echo >> "$output_objdir/$my_dlsyms" "\
{0, (void *) 0}
};
/* This works around a problem in FreeBSD linker */
#ifdef FREEBSD_WORKAROUND
static const void *lt_preloaded_setup() {
return lt_${my_prefix}_LTX_preloaded_symbols;
}
#endif
#ifdef __cplusplus
}
#endif\
"
} # !$opt_dry_run
pic_flag_for_symtable=
case "$compile_command " in
*" -static "*) ;;
*)
case $host in
# compiling the symbol table file with pic_flag works around
# a FreeBSD bug that causes programs to crash when -lm is
# linked before any other PIC object. But we must not use
# pic_flag when linking with -static. The problem exists in
# FreeBSD 2.2.6 and is fixed in FreeBSD 3.1.
*-*-freebsd2.*|*-*-freebsd3.0*|*-*-freebsdelf3.0*)
pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND" ;;
*-*-hpux*)
pic_flag_for_symtable=" $pic_flag" ;;
*)
$my_pic_p && pic_flag_for_symtable=" $pic_flag"
;;
esac
;;
esac
symtab_cflags=
for arg in $LTCFLAGS; do
case $arg in
-pie | -fpie | -fPIE) ;;
*) func_append symtab_cflags " $arg" ;;
esac
done
# Now compile the dynamic symbol file.
func_show_eval '(cd $output_objdir && $LTCC$symtab_cflags -c$no_builtin_flag$pic_flag_for_symtable "$my_dlsyms")' 'exit $?'
# Clean up the generated files.
func_show_eval '$RM "$output_objdir/$my_dlsyms" "$nlist" "${nlist}S" "${nlist}T" "${nlist}I"'
# Transform the symbol file into the correct name.
symfileobj=$output_objdir/${my_outputname}S.$objext
case $host in
*cygwin* | *mingw* | *cegcc* )
if test -f "$output_objdir/$my_outputname.def"; then
compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"`
finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"`
else
compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"`
finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"`
fi
;;
*)
compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"`
finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"`
;;
esac
;;
*)
func_fatal_error "unknown suffix for '$my_dlsyms'"
;;
esac
else
# We keep going just in case the user didn't refer to
# lt_preloaded_symbols. The linker will fail if global_symbol_pipe
# really was required.
# Nullify the symbol file.
compile_command=`$ECHO "$compile_command" | $SED "s% @SYMFILE@%%"`
finalize_command=`$ECHO "$finalize_command" | $SED "s% @SYMFILE@%%"`
fi
}
# func_cygming_gnu_implib_p ARG
# This predicate returns with zero status (TRUE) if
# ARG is a GNU/binutils-style import library. Returns
# with nonzero status (FALSE) otherwise.
func_cygming_gnu_implib_p ()
{
$debug_cmd
func_to_tool_file "$1" func_convert_file_msys_to_w32
func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'`
test -n "$func_cygming_gnu_implib_tmp"
}
# func_cygming_ms_implib_p ARG
# This predicate returns with zero status (TRUE) if
# ARG is an MS-style import library. Returns
# with nonzero status (FALSE) otherwise.
func_cygming_ms_implib_p ()
{
$debug_cmd
func_to_tool_file "$1" func_convert_file_msys_to_w32
func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'`
test -n "$func_cygming_ms_implib_tmp"
}
# func_win32_libid arg
# return the library type of file 'arg'
#
# Need a lot of goo to handle *both* DLLs and import libs
# Has to be a shell function in order to 'eat' the argument
# that is supplied when $file_magic_command is called.
# Despite the name, this also deals with 64-bit binaries.
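# Illustrative results (hypothetical file names) on a mingw/cygwin host:
#   func_win32_libid foo.dll        ->  "x86 DLL"
#   func_win32_libid libfoo.dll.a   ->  "x86 archive import"
#   func_win32_libid libfoo.a       ->  "x86 archive static"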
func_win32_libid ()
{
$debug_cmd
win32_libid_type=unknown
win32_fileres=`file -L $1 2>/dev/null`
case $win32_fileres in
*ar\ archive\ import\ library*) # definitely import
win32_libid_type="x86 archive import"
;;
*ar\ archive*) # could be an import, or static
# Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD.
if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null |
$EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
case $nm_interface in
"MS dumpbin")
if func_cygming_ms_implib_p "$1" ||
func_cygming_gnu_implib_p "$1"
then
win32_nmres=import
else
win32_nmres=
fi
;;
*)
func_to_tool_file "$1" func_convert_file_msys_to_w32
win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" |
$SED -n -e '
1,100{
/ I /{
s|.*|import|
p
q
}
}'`
;;
esac
case $win32_nmres in
import*) win32_libid_type="x86 archive import";;
*) win32_libid_type="x86 archive static";;
esac
fi
;;
*DLL*)
win32_libid_type="x86 DLL"
;;
*executable*) # but shell scripts are "executable" too...
case $win32_fileres in
*MS\ Windows\ PE\ Intel*)
win32_libid_type="x86 DLL"
;;
esac
;;
esac
$ECHO "$win32_libid_type"
}
# func_cygming_dll_for_implib ARG
#
# Platform-specific function to extract the
# name of the DLL associated with the specified
# import library ARG.
# Invoked by eval'ing the libtool variable
# $sharedlib_from_linklib_cmd
# Result is available in the variable
# $sharedlib_from_linklib_result
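# Illustrative example (hypothetical names):
#   func_cygming_dll_for_implib libfoo.dll.a
# sets sharedlib_from_linklib_result to something like "cygfoo-1.dll".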
func_cygming_dll_for_implib ()
{
$debug_cmd
sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"`
}
# func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs
#
# This is the core of a fallback implementation of a
# platform-specific function to extract the name of the
# DLL associated with the specified import library LIBNAME.
#
# SECTION_NAME is either .idata$6 or .idata$7, depending
# on the platform and compiler that created the implib.
#
# Echoes the name of the DLL associated with the
# specified import library.
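# Illustrative example (hypothetical names): for a binutils-built implib,
#   func_cygming_dll_for_implib_fallback_core '.idata$7' libfoo.dll.a
# would echo something like "cygfoo-1.dll".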
func_cygming_dll_for_implib_fallback_core ()
{
$debug_cmd
match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"`
$OBJDUMP -s --section "$1" "$2" 2>/dev/null |
$SED '/^Contents of section '"$match_literal"':/{
# Place marker at beginning of archive member dllname section
s/.*/====MARK====/
p
d
}
# These lines can sometimes be longer than 43 characters, but
# are always uninteresting
/:[ ]*file format pe[i]\{,1\}-/d
/^In archive [^:]*:/d
# Ensure marker is printed
/^====MARK====/p
# Remove all lines with less than 43 characters
/^.\{43\}/!d
# From remaining lines, remove first 43 characters
s/^.\{43\}//' |
$SED -n '
# Join marker and all lines until next marker into a single line
/^====MARK====/ b para
H
$ b para
b
:para
x
s/\n//g
# Remove the marker
s/^====MARK====//
# Remove trailing dots and whitespace
s/[\. \t]*$//
# Print
/./p' |
# we now have a list, one entry per line, of the stringified
# contents of the appropriate section of all members of the
# archive that possess that section. Heuristic: eliminate
# all those that have a first or second character that is
# a '.' (that is, objdump's representation of an unprintable
# character.) This should work for all archives with less than
# 0x302f exports -- but will fail for DLLs whose name actually
# begins with a literal '.' or a single character followed by
# a '.'.
#
# Of those that remain, print the first one.
$SED -e '/^\./d;/^.\./d;q'
}
# func_cygming_dll_for_implib_fallback ARG
# Platform-specific function to extract the
# name of the DLL associated with the specified
# import library ARG.
#
# This fallback implementation is for use when $DLLTOOL
# does not support the --identify-strict option.
# Invoked by eval'ing the libtool variable
# $sharedlib_from_linklib_cmd
# Result is available in the variable
# $sharedlib_from_linklib_result
func_cygming_dll_for_implib_fallback ()
{
$debug_cmd
if func_cygming_gnu_implib_p "$1"; then
# binutils import library
sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"`
elif func_cygming_ms_implib_p "$1"; then
# ms-generated import library
sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"`
else
# unknown
sharedlib_from_linklib_result=
fi
}
# func_extract_an_archive dir oldlib
func_extract_an_archive ()
{
$debug_cmd
f_ex_an_ar_dir=$1; shift
f_ex_an_ar_oldlib=$1
if test yes = "$lock_old_archive_extraction"; then
lockfile=$f_ex_an_ar_oldlib.lock
until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do
func_echo "Waiting for $lockfile to be removed"
sleep 2
done
fi
func_show_eval "(cd \$f_ex_an_ar_dir && $AR x \"\$f_ex_an_ar_oldlib\")" \
'stat=$?; rm -f "$lockfile"; exit $stat'
if test yes = "$lock_old_archive_extraction"; then
$opt_dry_run || rm -f "$lockfile"
fi
if ($AR t "$f_ex_an_ar_oldlib" | sort | sort -uc >/dev/null 2>&1); then
:
else
func_fatal_error "object name conflicts in archive: $f_ex_an_ar_dir/$f_ex_an_ar_oldlib"
fi
}
# func_extract_archives gentop oldlib ...
func_extract_archives ()
{
$debug_cmd
my_gentop=$1; shift
my_oldlibs=${1+"$@"}
my_oldobjs=
my_xlib=
my_xabs=
my_xdir=
for my_xlib in $my_oldlibs; do
# Extract the objects.
case $my_xlib in
[\\/]* | [A-Za-z]:[\\/]*) my_xabs=$my_xlib ;;
*) my_xabs=`pwd`"/$my_xlib" ;;
esac
func_basename "$my_xlib"
my_xlib=$func_basename_result
my_xlib_u=$my_xlib
while :; do
case " $extracted_archives " in
*" $my_xlib_u "*)
func_arith $extracted_serial + 1
extracted_serial=$func_arith_result
my_xlib_u=lt$extracted_serial-$my_xlib ;;
*) break ;;
esac
done
extracted_archives="$extracted_archives $my_xlib_u"
my_xdir=$my_gentop/$my_xlib_u
func_mkdir_p "$my_xdir"
case $host in
*-darwin*)
func_verbose "Extracting $my_xabs"
# Do not bother doing anything if just a dry run
$opt_dry_run || {
darwin_orig_dir=`pwd`
cd $my_xdir || exit $?
darwin_archive=$my_xabs
darwin_curdir=`pwd`
func_basename "$darwin_archive"
darwin_base_archive=$func_basename_result
darwin_arches=`$LIPO -info "$darwin_archive" 2>/dev/null | $GREP Architectures 2>/dev/null || true`
if test -n "$darwin_arches"; then
darwin_arches=`$ECHO "$darwin_arches" | $SED -e 's/.*are://'`
darwin_arch=
func_verbose "$darwin_base_archive has multiple architectures $darwin_arches"
for darwin_arch in $darwin_arches; do
func_mkdir_p "unfat-$$/$darwin_base_archive-$darwin_arch"
$LIPO -thin $darwin_arch -output "unfat-$$/$darwin_base_archive-$darwin_arch/$darwin_base_archive" "$darwin_archive"
cd "unfat-$$/$darwin_base_archive-$darwin_arch"
func_extract_an_archive "`pwd`" "$darwin_base_archive"
cd "$darwin_curdir"
$RM "unfat-$$/$darwin_base_archive-$darwin_arch/$darwin_base_archive"
done # $darwin_arches
## Okay now we've a bunch of thin objects, gotta fatten them up :)
darwin_filelist=`find unfat-$$ -type f -name \*.o -print -o -name \*.lo -print | $SED -e "$sed_basename" | sort -u`
darwin_file=
darwin_files=
for darwin_file in $darwin_filelist; do
darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP`
$LIPO -create -output "$darwin_file" $darwin_files
done # $darwin_filelist
$RM -rf unfat-$$
cd "$darwin_orig_dir"
else
cd $darwin_orig_dir
func_extract_an_archive "$my_xdir" "$my_xabs"
fi # $darwin_arches
} # !$opt_dry_run
;;
*)
func_extract_an_archive "$my_xdir" "$my_xabs"
;;
esac
my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP`
done
func_extract_archives_result=$my_oldobjs
}
# func_emit_wrapper [arg=no]
#
# Emit a libtool wrapper script on stdout.
# Don't directly open a file because we may want to
# incorporate the script contents within a cygwin/mingw
# wrapper executable. Must ONLY be called from within
# func_mode_link because it depends on a number of variables
# set therein.
#
# ARG is the value that the WRAPPER_SCRIPT_BELONGS_IN_OBJDIR
# variable will take. If 'yes', then the emitted script
# will assume that the directory where it is stored is
# the $objdir directory. This is a cygwin/mingw-specific
# behavior.
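# Illustrative call (hypothetical path):
#   func_emit_wrapper no > "$output_objdir/$outputname.wrapper"
# emits a script that relinks the program if needed and then execs the
# real binary from $objdir.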
func_emit_wrapper ()
{
func_emit_wrapper_arg1=${1-no}
$ECHO "\
#! $SHELL
# $output - temporary wrapper script for $objdir/$outputname
# Generated by $PROGRAM (GNU $PACKAGE) $VERSION
#
# The $output program cannot be directly executed until all the libtool
# libraries that it depends on are installed.
#
# This wrapper script should never be moved out of the build directory.
# If it is, it will not operate correctly.
# Sed substitution that helps us do robust quoting. It backslashifies
# metacharacters that are still active within double-quoted strings.
sed_quote_subst='$sed_quote_subst'
# Be Bourne compatible
if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then
emulate sh
NULLCMD=:
# Zsh 3.x and 4.x performs word splitting on \${1+\"\$@\"}, which
# is contrary to our usage. Disable this feature.
alias -g '\${1+\"\$@\"}'='\"\$@\"'
setopt NO_GLOB_SUBST
else
case \`(set -o) 2>/dev/null\` in *posix*) set -o posix;; esac
fi
BIN_SH=xpg4; export BIN_SH # for Tru64
DUALCASE=1; export DUALCASE # for MKS sh
# The HP-UX ksh and POSIX shell print the target directory to stdout
# if CDPATH is set.
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH
relink_command=\"$relink_command\"
# This environment variable determines our operation mode.
if test \"\$libtool_install_magic\" = \"$magic\"; then
# install mode needs the following variables:
generated_by_libtool_version='$macro_version'
notinst_deplibs='$notinst_deplibs'
else
# When we are sourced in execute mode, \$file and \$ECHO are already set.
if test \"\$libtool_execute_magic\" != \"$magic\"; then
file=\"\$0\""
qECHO=`$ECHO "$ECHO" | $SED "$sed_quote_subst"`
$ECHO "\
# A function that is used when there is no print builtin or printf.
func_fallback_echo ()
{
eval 'cat <<_LTECHO_EOF
\$1
_LTECHO_EOF'
}
ECHO=\"$qECHO\"
fi
# Very basic option parsing. These options are (a) specific to
# the libtool wrapper, (b) are identical between the wrapper
# /script/ and the wrapper /executable/ that is used only on
# windows platforms, and (c) all begin with the string "--lt-"
# (application programs are unlikely to have options that match
# this pattern).
#
# There are only two supported options: --lt-debug and
# --lt-dump-script. There is, deliberately, no --lt-help.
#
# The first argument to this parsing function should be the
# script's $0 value, followed by "$@".
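# For example (illustrative), invoking the wrapper as
#   ./prog --lt-debug arg1 arg2
# prints a debug banner plus the final argv to stderr and then runs the
# real program with just 'arg1 arg2'.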
lt_option_debug=
func_parse_lt_options ()
{
lt_script_arg0=\$0
shift
for lt_opt
do
case \"\$lt_opt\" in
--lt-debug) lt_option_debug=1 ;;
--lt-dump-script)
lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\`
test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=.
lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\`
cat \"\$lt_dump_D/\$lt_dump_F\"
exit 0
;;
--lt-*)
\$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2
exit 1
;;
esac
done
# Print the debug banner immediately:
if test -n \"\$lt_option_debug\"; then
echo \"$outputname:$output:\$LINENO: libtool wrapper (GNU $PACKAGE) $VERSION\" 1>&2
fi
}
# Used when --lt-debug. Prints its arguments to stdout
# (redirection is the responsibility of the caller)
func_lt_dump_args ()
{
lt_dump_args_N=1;
for lt_arg
do
\$ECHO \"$outputname:$output:\$LINENO: newargv[\$lt_dump_args_N]: \$lt_arg\"
lt_dump_args_N=\`expr \$lt_dump_args_N + 1\`
done
}
# Core function for launching the target application
func_exec_program_core ()
{
"
case $host in
# Backslashes separate directories on plain windows
*-*-mingw | *-*-os2* | *-cegcc*)
$ECHO "\
if test -n \"\$lt_option_debug\"; then
\$ECHO \"$outputname:$output:\$LINENO: newargv[0]: \$progdir\\\\\$program\" 1>&2
func_lt_dump_args \${1+\"\$@\"} 1>&2
fi
exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
"
;;
*)
$ECHO "\
if test -n \"\$lt_option_debug\"; then
\$ECHO \"$outputname:$output:\$LINENO: newargv[0]: \$progdir/\$program\" 1>&2
func_lt_dump_args \${1+\"\$@\"} 1>&2
fi
exec \"\$progdir/\$program\" \${1+\"\$@\"}
"
;;
esac
$ECHO "\
\$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
exit 1
}
# A function to encapsulate launching the target application
# Strips options in the --lt-* namespace from \$@ and
# launches target application with the remaining arguments.
func_exec_program ()
{
case \" \$* \" in
*\\ --lt-*)
for lt_wr_arg
do
case \$lt_wr_arg in
--lt-*) ;;
*) set x \"\$@\" \"\$lt_wr_arg\"; shift;;
esac
shift
done ;;
esac
func_exec_program_core \${1+\"\$@\"}
}
# Parse options
func_parse_lt_options \"\$0\" \${1+\"\$@\"}
# Find the directory that this script lives in.
thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\`
test \"x\$thisdir\" = \"x\$file\" && thisdir=.
# Follow symbolic links until we get to the real thisdir.
file=\`ls -ld \"\$file\" | $SED -n 's/.*-> //p'\`
while test -n \"\$file\"; do
destdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*\$%%'\`
# If there was a directory component, then change thisdir.
if test \"x\$destdir\" != \"x\$file\"; then
case \"\$destdir\" in
[\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;;
*) thisdir=\"\$thisdir/\$destdir\" ;;
esac
fi
file=\`\$ECHO \"\$file\" | $SED 's%^.*/%%'\`
file=\`ls -ld \"\$thisdir/\$file\" | $SED -n 's/.*-> //p'\`
done
# Usually 'no', except on cygwin/mingw when embedded into
# the cwrapper.
WRAPPER_SCRIPT_BELONGS_IN_OBJDIR=$func_emit_wrapper_arg1
if test \"\$WRAPPER_SCRIPT_BELONGS_IN_OBJDIR\" = \"yes\"; then
# special case for '.'
if test \"\$thisdir\" = \".\"; then
thisdir=\`pwd\`
fi
# remove .libs from thisdir
case \"\$thisdir\" in
*[\\\\/]$objdir ) thisdir=\`\$ECHO \"\$thisdir\" | $SED 's%[\\\\/][^\\\\/]*$%%'\` ;;
$objdir ) thisdir=. ;;
esac
fi
# Try to get the absolute directory name.
absdir=\`cd \"\$thisdir\" && pwd\`
test -n \"\$absdir\" && thisdir=\"\$absdir\"
"
if test yes = "$fast_install"; then
$ECHO "\
program=lt-'$outputname'$exeext
progdir=\"\$thisdir/$objdir\"
if test ! -f \"\$progdir/\$program\" ||
{ file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | $SED 1q\`; \\
test \"X\$file\" != \"X\$progdir/\$program\"; }; then
file=\"\$\$-\$program\"
if test ! -d \"\$progdir\"; then
$MKDIR \"\$progdir\"
else
$RM \"\$progdir/\$file\"
fi"
$ECHO "\
# relink executable if necessary
if test -n \"\$relink_command\"; then
if relink_command_output=\`eval \$relink_command 2>&1\`; then :
else
\$ECHO \"\$relink_command_output\" >&2
$RM \"\$progdir/\$file\"
exit 1
fi
fi
$MV \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null ||
{ $RM \"\$progdir/\$program\";
$MV \"\$progdir/\$file\" \"\$progdir/\$program\"; }
$RM \"\$progdir/\$file\"
fi"
else
$ECHO "\
program='$outputname'
progdir=\"\$thisdir/$objdir\"
"
fi
$ECHO "\
if test -f \"\$progdir/\$program\"; then"
# Fix the DLL search path if we need to. Do this before prepending
# to shlibpath, because on Windows, both are PATH and uninstalled
# libraries must come first.
if test -n "$dllsearchpath"; then
$ECHO "\
# Add the dll search path components to the executable PATH
PATH=$dllsearchpath:\$PATH
"
fi
# Export our shlibpath_var if we have one.
if test yes = "$shlibpath_overrides_runpath" && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
$ECHO "\
# Add our own library path to $shlibpath_var
$shlibpath_var=\"$temp_rpath\$$shlibpath_var\"
# Some systems cannot cope with colon-terminated $shlibpath_var
# The second colon is a workaround for a bug in BeOS R4 sed
$shlibpath_var=\`\$ECHO \"\$$shlibpath_var\" | $SED 's/::*\$//'\`
export $shlibpath_var
"
fi
$ECHO "\
if test \"\$libtool_execute_magic\" != \"$magic\"; then
# Run the actual program with our arguments.
func_exec_program \${1+\"\$@\"}
fi
else
# The program doesn't exist.
\$ECHO \"\$0: error: '\$progdir/\$program' does not exist\" 1>&2
\$ECHO \"This script is just a wrapper for \$program.\" 1>&2
\$ECHO \"See the $PACKAGE documentation for more information.\" 1>&2
exit 1
fi
fi\
"
}
# func_emit_cwrapperexe_src
# emit the source code for a wrapper executable on stdout
# Must ONLY be called from within func_mode_link because
# it depends on a number of variables set therein.
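# Illustrative use (hypothetical names): on mingw the link mode roughly does
#   func_emit_cwrapperexe_src > "$output_objdir/lt-$outputname.c"
#   $LTCC $LTCFLAGS -o "$output_objdir/$outputname.exe" "$output_objdir/lt-$outputname.c"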
func_emit_cwrapperexe_src ()
{
cat <<"EOF"
#ifdef _MSC_VER
# define _CRT_SECURE_NO_DEPRECATE 1
#endif
#include <stdio.h>
#include <stdlib.h>
#ifdef _MSC_VER
# include <direct.h>
# include <process.h>
# include <io.h>
#else
# include <unistd.h>
# include <stdint.h>
# ifdef __CYGWIN__
#  include <io.h>
# endif
#endif
#include <malloc.h>
#include <stdarg.h>
#include <assert.h>
#include <string.h>
#include <ctype.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#define STREQ(s1, s2) (strcmp ((s1), (s2)) == 0)
/* declarations of non-ANSI functions */
#if defined __MINGW32__
# ifdef __STRICT_ANSI__
int _putenv (const char *);
# endif
#elif defined __CYGWIN__
# ifdef __STRICT_ANSI__
char *realpath (const char *, char *);
int putenv (char *);
int setenv (const char *, const char *, int);
# endif
/* #elif defined other_platform || defined ... */
#endif
/* portability defines, excluding path handling macros */
#if defined _MSC_VER
# define setmode _setmode
# define stat _stat
# define chmod _chmod
# define getcwd _getcwd
# define putenv _putenv
# define S_IXUSR _S_IEXEC
#elif defined __MINGW32__
# define setmode _setmode
# define stat _stat
# define chmod _chmod
# define getcwd _getcwd
# define putenv _putenv
#elif defined __CYGWIN__
# define HAVE_SETENV
# define FOPEN_WB "wb"
/* #elif defined other platforms ... */
#endif
#if defined PATH_MAX
# define LT_PATHMAX PATH_MAX
#elif defined MAXPATHLEN
# define LT_PATHMAX MAXPATHLEN
#else
# define LT_PATHMAX 1024
#endif
#ifndef S_IXOTH
# define S_IXOTH 0
#endif
#ifndef S_IXGRP
# define S_IXGRP 0
#endif
/* path handling portability macros */
#ifndef DIR_SEPARATOR
# define DIR_SEPARATOR '/'
# define PATH_SEPARATOR ':'
#endif
#if defined _WIN32 || defined __MSDOS__ || defined __DJGPP__ || \
defined __OS2__
# define HAVE_DOS_BASED_FILE_SYSTEM
# define FOPEN_WB "wb"
# ifndef DIR_SEPARATOR_2
# define DIR_SEPARATOR_2 '\\'
# endif
# ifndef PATH_SEPARATOR_2
# define PATH_SEPARATOR_2 ';'
# endif
#endif
#ifndef DIR_SEPARATOR_2
# define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR)
#else /* DIR_SEPARATOR_2 */
# define IS_DIR_SEPARATOR(ch) \
(((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2))
#endif /* DIR_SEPARATOR_2 */
#ifndef PATH_SEPARATOR_2
# define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR)
#else /* PATH_SEPARATOR_2 */
# define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR_2)
#endif /* PATH_SEPARATOR_2 */
#ifndef FOPEN_WB
# define FOPEN_WB "w"
#endif
#ifndef _O_BINARY
# define _O_BINARY 0
#endif
#define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type)))
#define XFREE(stale) do { \
if (stale) { free (stale); stale = 0; } \
} while (0)
#if defined LT_DEBUGWRAPPER
static int lt_debug = 1;
#else
static int lt_debug = 0;
#endif
const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */
void *xmalloc (size_t num);
char *xstrdup (const char *string);
const char *base_name (const char *name);
char *find_executable (const char *wrapper);
char *chase_symlinks (const char *pathspec);
int make_executable (const char *path);
int check_executable (const char *path);
char *strendzap (char *str, const char *pat);
void lt_debugprintf (const char *file, int line, const char *fmt, ...);
void lt_fatal (const char *file, int line, const char *message, ...);
static const char *nonnull (const char *s);
static const char *nonempty (const char *s);
void lt_setenv (const char *name, const char *value);
char *lt_extend_str (const char *orig_value, const char *add, int to_end);
void lt_update_exe_path (const char *name, const char *value);
void lt_update_lib_path (const char *name, const char *value);
char **prepare_spawn (char **argv);
void lt_dump_script (FILE *f);
EOF
cat <<"EOF"
int
check_executable (const char *path)
{
  struct stat st;
  lt_debugprintf (__FILE__, __LINE__, "(check_executable): %s\n",
                  nonempty (path));
  if ((!path) || (!*path))
    return 0;
  if ((stat (path, &st) >= 0)
&& (st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH)))
return 1;
else
return 0;
}
int
make_executable (const char *path)
{
int rval = 0;
struct stat st;
lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n",
nonempty (path));
if ((!path) || (!*path))
return 0;
if (stat (path, &st) >= 0)
{
rval = chmod (path, st.st_mode | S_IXOTH | S_IXGRP | S_IXUSR);
}
return rval;
}
/* Searches for the full path of the wrapper. Returns
newly allocated full path name if found, NULL otherwise.
Does not chase symlinks, even on platforms that support them.
*/
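/* Illustrative behaviour (hypothetical paths): find_executable ("foo")
   scans PATH and might return "/usr/local/bin/foo"; a name containing a
   slash, such as "./foo", is simply resolved against the current
   directory instead.  */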
char *
find_executable (const char *wrapper)
{
int has_slash = 0;
const char *p;
const char *p_next;
/* static buffer for getcwd */
char tmp[LT_PATHMAX + 1];
size_t tmp_len;
char *concat_name;
lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n",
nonempty (wrapper));
if ((wrapper == NULL) || (*wrapper == '\0'))
return NULL;
/* Absolute path? */
#if defined HAVE_DOS_BASED_FILE_SYSTEM
if (isalpha ((unsigned char) wrapper[0]) && wrapper[1] == ':')
{
concat_name = xstrdup (wrapper);
if (check_executable (concat_name))
return concat_name;
XFREE (concat_name);
}
else
{
#endif
if (IS_DIR_SEPARATOR (wrapper[0]))
{
concat_name = xstrdup (wrapper);
if (check_executable (concat_name))
return concat_name;
XFREE (concat_name);
}
#if defined HAVE_DOS_BASED_FILE_SYSTEM
}
#endif
for (p = wrapper; *p; p++)
if (*p == '/')
{
has_slash = 1;
break;
}
if (!has_slash)
{
/* no slashes; search PATH */
const char *path = getenv ("PATH");
if (path != NULL)
{
for (p = path; *p; p = p_next)
{
const char *q;
size_t p_len;
for (q = p; *q; q++)
if (IS_PATH_SEPARATOR (*q))
break;
p_len = (size_t) (q - p);
p_next = (*q == '\0' ? q : q + 1);
if (p_len == 0)
{
/* empty path: current directory */
if (getcwd (tmp, LT_PATHMAX) == NULL)
lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
nonnull (strerror (errno)));
tmp_len = strlen (tmp);
concat_name =
XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
memcpy (concat_name, tmp, tmp_len);
concat_name[tmp_len] = '/';
strcpy (concat_name + tmp_len + 1, wrapper);
}
else
{
concat_name =
XMALLOC (char, p_len + 1 + strlen (wrapper) + 1);
memcpy (concat_name, p, p_len);
concat_name[p_len] = '/';
strcpy (concat_name + p_len + 1, wrapper);
}
if (check_executable (concat_name))
return concat_name;
XFREE (concat_name);
}
}
/* not found in PATH; assume curdir */
}
/* Relative path | not found in path: prepend cwd */
if (getcwd (tmp, LT_PATHMAX) == NULL)
lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
nonnull (strerror (errno)));
tmp_len = strlen (tmp);
concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
memcpy (concat_name, tmp, tmp_len);
concat_name[tmp_len] = '/';
strcpy (concat_name + tmp_len + 1, wrapper);
if (check_executable (concat_name))
return concat_name;
XFREE (concat_name);
return NULL;
}
char *
chase_symlinks (const char *pathspec)
{
#ifndef S_ISLNK
return xstrdup (pathspec);
#else
char buf[LT_PATHMAX];
struct stat s;
char *tmp_pathspec = xstrdup (pathspec);
char *p;
int has_symlinks = 0;
while (strlen (tmp_pathspec) && !has_symlinks)
{
lt_debugprintf (__FILE__, __LINE__,
"checking path component for symlinks: %s\n",
tmp_pathspec);
if (lstat (tmp_pathspec, &s) == 0)
{
if (S_ISLNK (s.st_mode) != 0)
{
has_symlinks = 1;
break;
}
/* search backwards for last DIR_SEPARATOR */
p = tmp_pathspec + strlen (tmp_pathspec) - 1;
while ((p > tmp_pathspec) && (!IS_DIR_SEPARATOR (*p)))
p--;
if ((p == tmp_pathspec) && (!IS_DIR_SEPARATOR (*p)))
{
/* no more DIR_SEPARATORS left */
break;
}
*p = '\0';
}
else
{
lt_fatal (__FILE__, __LINE__,
"error accessing file \"%s\": %s",
tmp_pathspec, nonnull (strerror (errno)));
}
}
XFREE (tmp_pathspec);
if (!has_symlinks)
{
return xstrdup (pathspec);
}
tmp_pathspec = realpath (pathspec, buf);
if (tmp_pathspec == 0)
{
lt_fatal (__FILE__, __LINE__,
"could not follow symlinks for %s", pathspec);
}
return xstrdup (tmp_pathspec);
#endif
}
char *
strendzap (char *str, const char *pat)
{
size_t len, patlen;
assert (str != NULL);
assert (pat != NULL);
len = strlen (str);
patlen = strlen (pat);
if (patlen <= len)
{
str += len - patlen;
if (STREQ (str, pat))
*str = '\0';
}
return str;
}
void
lt_debugprintf (const char *file, int line, const char *fmt, ...)
{
va_list args;
if (lt_debug)
{
(void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line);
va_start (args, fmt);
(void) vfprintf (stderr, fmt, args);
va_end (args);
}
}
static void
lt_error_core (int exit_status, const char *file,
int line, const char *mode,
const char *message, va_list ap)
{
fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode);
vfprintf (stderr, message, ap);
fprintf (stderr, ".\n");
if (exit_status >= 0)
exit (exit_status);
}
void
lt_fatal (const char *file, int line, const char *message, ...)
{
va_list ap;
va_start (ap, message);
lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap);
va_end (ap);
}
static const char *
nonnull (const char *s)
{
return s ? s : "(null)";
}
static const char *
nonempty (const char *s)
{
return (s && !*s) ? "(empty)" : nonnull (s);
}
void
lt_setenv (const char *name, const char *value)
{
lt_debugprintf (__FILE__, __LINE__,
"(lt_setenv) setting '%s' to '%s'\n",
nonnull (name), nonnull (value));
{
#ifdef HAVE_SETENV
/* always make a copy, for consistency with !HAVE_SETENV */
char *str = xstrdup (value);
setenv (name, str, 1);
#else
size_t len = strlen (name) + 1 + strlen (value) + 1;
char *str = XMALLOC (char, len);
sprintf (str, "%s=%s", name, value);
if (putenv (str) != EXIT_SUCCESS)
{
XFREE (str);
}
#endif
}
}
char *
lt_extend_str (const char *orig_value, const char *add, int to_end)
{
char *new_value;
if (orig_value && *orig_value)
{
size_t orig_value_len = strlen (orig_value);
size_t add_len = strlen (add);
new_value = XMALLOC (char, add_len + orig_value_len + 1);
if (to_end)
{
strcpy (new_value, orig_value);
strcpy (new_value + orig_value_len, add);
}
else
{
strcpy (new_value, add);
strcpy (new_value + add_len, orig_value);
}
}
else
{
new_value = xstrdup (add);
}
return new_value;
}
void
lt_update_exe_path (const char *name, const char *value)
{
lt_debugprintf (__FILE__, __LINE__,
"(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
nonnull (name), nonnull (value));
if (name && *name && value && *value)
{
char *new_value = lt_extend_str (getenv (name), value, 0);
/* some systems can't cope with a ':'-terminated path #' */
size_t len = strlen (new_value);
while ((len > 0) && IS_PATH_SEPARATOR (new_value[len-1]))
{
new_value[--len] = '\0';
}
lt_setenv (name, new_value);
XFREE (new_value);
}
}
void
lt_update_lib_path (const char *name, const char *value)
{
lt_debugprintf (__FILE__, __LINE__,
"(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
nonnull (name), nonnull (value));
if (name && *name && value && *value)
{
char *new_value = lt_extend_str (getenv (name), value, 0);
lt_setenv (name, new_value);
XFREE (new_value);
}
}
EOF
case $host_os in
mingw*)
cat <<"EOF"
/* Prepares an argument vector before calling spawn().
Note that spawn() does not by itself call the command interpreter
(getenv ("COMSPEC") != NULL ? getenv ("COMSPEC") :
({ OSVERSIONINFO v; v.dwOSVersionInfoSize = sizeof(OSVERSIONINFO);
GetVersionEx(&v);
v.dwPlatformId == VER_PLATFORM_WIN32_NT;
}) ? "cmd.exe" : "command.com").
Instead it simply concatenates the arguments, separated by ' ', and calls
CreateProcess(). We must quote the arguments since Win32 CreateProcess()
interprets characters like ' ', '\t', '\\', '"' (but not '<' and '>') in a
special way:
- Space and tab are interpreted as delimiters. They are not treated as
delimiters if they are surrounded by double quotes: "...".
- Unescaped double quotes are removed from the input. Their only effect is
that within double quotes, space and tab are treated like normal
characters.
- Backslashes not followed by double quotes are not special.
- But 2*n+1 backslashes followed by a double quote become
n backslashes followed by a double quote (n >= 0):
\" -> "
\\\" -> \"
\\\\\" -> \\"
*/
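/* Worked example of the rules above (illustrative): the argument
      a b"c\   (trailing backslash)
   is quoted as
      "a b\"c\\"
   so that CreateProcess() reconstructs the original string for the child. */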
#define SHELL_SPECIAL_CHARS "\"\\ \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037"
#define SHELL_SPACE_CHARS " \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037"
char **
prepare_spawn (char **argv)
{
size_t argc;
char **new_argv;
size_t i;
/* Count number of arguments. */
for (argc = 0; argv[argc] != NULL; argc++)
;
/* Allocate new argument vector. */
new_argv = XMALLOC (char *, argc + 1);
/* Put quoted arguments into the new argument vector. */
for (i = 0; i < argc; i++)
{
const char *string = argv[i];
if (string[0] == '\0')
new_argv[i] = xstrdup ("\"\"");
else if (strpbrk (string, SHELL_SPECIAL_CHARS) != NULL)
{
int quote_around = (strpbrk (string, SHELL_SPACE_CHARS) != NULL);
size_t length;
unsigned int backslashes;
const char *s;
char *quoted_string;
char *p;
length = 0;
backslashes = 0;
if (quote_around)
length++;
for (s = string; *s != '\0'; s++)
{
char c = *s;
if (c == '"')
length += backslashes + 1;
length++;
if (c == '\\')
backslashes++;
else
backslashes = 0;
}
if (quote_around)
length += backslashes + 1;
quoted_string = XMALLOC (char, length + 1);
p = quoted_string;
backslashes = 0;
if (quote_around)
*p++ = '"';
for (s = string; *s != '\0'; s++)
{
char c = *s;
if (c == '"')
{
unsigned int j;
for (j = backslashes + 1; j > 0; j--)
*p++ = '\\';
}
*p++ = c;
if (c == '\\')
backslashes++;
else
backslashes = 0;
}
if (quote_around)
{
unsigned int j;
for (j = backslashes; j > 0; j--)
*p++ = '\\';
*p++ = '"';
}
*p = '\0';
new_argv[i] = quoted_string;
}
else
new_argv[i] = (char *) string;
}
new_argv[argc] = NULL;
return new_argv;
}
EOF
;;
esac
cat <<"EOF"
void lt_dump_script (FILE* f)
{
EOF
func_emit_wrapper yes |
$SED -n -e '
s/^\(.\{79\}\)\(..*\)/\1\
\2/
h
s/\([\\"]\)/\\\1/g
s/$/\\n/
s/\([^\n]*\).*/ fputs ("\1", f);/p
g
D'
cat <<"EOF"
}
EOF
}
# end: func_emit_cwrapperexe_src
# func_win32_import_lib_p ARG
# True if ARG is an import lib, as indicated by $file_magic_cmd
func_win32_import_lib_p ()
{
$debug_cmd
case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in
*import*) : ;;
*) false ;;
esac
}
# func_suncc_cstd_abi
# !!ONLY CALL THIS FOR SUN CC AFTER $compile_command IS FULLY EXPANDED!!
# Several compiler flags select an ABI that is incompatible with the
# Cstd library. Avoid specifying it if any are in CXXFLAGS.
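# Illustrative: a $compile_command containing " -std=c++11 " or
# " -library=stlport4 " sets suncc_use_cstd_abi=no; otherwise it stays yes.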
func_suncc_cstd_abi ()
{
$debug_cmd
case " $compile_command " in
*" -compat=g "*|*\ -std=c++[0-9][0-9]\ *|*" -library=stdcxx4 "*|*" -library=stlport4 "*)
suncc_use_cstd_abi=no
;;
*)
suncc_use_cstd_abi=yes
;;
esac
}
# func_mode_link arg...
func_mode_link ()
{
$debug_cmd
case $host in
*-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*)
# It is impossible to link a dll without this setting, and
# we shouldn't force the makefile maintainer to figure out
# what system we are compiling for in order to pass an extra
# flag for every libtool invocation.
# allow_undefined=no
# FIXME: Unfortunately, there are problems with the above when trying
# to make a dll that has undefined symbols, in which case not
# even a static library is built. For now, we need to specify
# -no-undefined on the libtool link line when we can be certain
# that all symbols are satisfied, otherwise we get a static library.
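# Illustrative link line (hypothetical project) that satisfies this:
#   libtool --mode=link $CC -no-undefined -o libfoo.la foo.lo -rpath /usr/lib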
allow_undefined=yes
;;
*)
allow_undefined=yes
;;
esac
libtool_args=$nonopt
base_compile="$nonopt $@"
compile_command=$nonopt
finalize_command=$nonopt
compile_rpath=
finalize_rpath=
compile_shlibpath=
finalize_shlibpath=
convenience=
old_convenience=
deplibs=
old_deplibs=
compiler_flags=
linker_flags=
dllsearchpath=
lib_search_path=`pwd`
inst_prefix_dir=
new_inherited_linker_flags=
avoid_version=no
bindir=
dlfiles=
dlprefiles=
dlself=no
export_dynamic=no
export_symbols=
export_symbols_regex=
generated=
libobjs=
ltlibs=
module=no
no_install=no
objs=
os2dllname=
non_pic_objects=
precious_files_regex=
prefer_static_libs=no
preload=false
prev=
prevarg=
release=
rpath=
xrpath=
perm_rpath=
temp_rpath=
thread_safe=no
vinfo=
vinfo_number=no
weak_libs=
single_module=$wl-single_module
func_infer_tag $base_compile
# We need to know -static, to get the right output filenames.
for arg
do
case $arg in
-shared)
test yes != "$build_libtool_libs" \
&& func_fatal_configuration "cannot build a shared library"
build_old_libs=no
break
;;
-all-static | -static | -static-libtool-libs)
case $arg in
-all-static)
if test yes = "$build_libtool_libs" && test -z "$link_static_flag"; then
func_warning "complete static linking is impossible in this configuration"
fi
if test -n "$link_static_flag"; then
dlopen_self=$dlopen_self_static
fi
prefer_static_libs=yes
;;
-static)
if test -z "$pic_flag" && test -n "$link_static_flag"; then
dlopen_self=$dlopen_self_static
fi
prefer_static_libs=built
;;
-static-libtool-libs)
if test -z "$pic_flag" && test -n "$link_static_flag"; then
dlopen_self=$dlopen_self_static
fi
prefer_static_libs=yes
;;
esac
build_libtool_libs=no
build_old_libs=yes
break
;;
esac
done
# See if our shared archives depend on static archives.
test -n "$old_archive_from_new_cmds" && build_old_libs=yes
# Go through the arguments, transforming them on the way.
while test "$#" -gt 0; do
arg=$1
shift
func_quote_for_eval "$arg"
qarg=$func_quote_for_eval_unquoted_result
func_append libtool_args " $func_quote_for_eval_result"
# If the previous option needs an argument, assign it.
if test -n "$prev"; then
case $prev in
output)
func_append compile_command " @OUTPUT@"
func_append finalize_command " @OUTPUT@"
;;
esac
case $prev in
bindir)
bindir=$arg
prev=
continue
;;
dlfiles|dlprefiles)
$preload || {
# Add the symbol object into the linking commands.
func_append compile_command " @SYMFILE@"
func_append finalize_command " @SYMFILE@"
preload=:
}
case $arg in
*.la | *.lo) ;; # We handle these cases below.
force)
if test no = "$dlself"; then
dlself=needless
export_dynamic=yes
fi
prev=
continue
;;
self)
if test dlprefiles = "$prev"; then
dlself=yes
elif test dlfiles = "$prev" && test yes != "$dlopen_self"; then
dlself=yes
else
dlself=needless
export_dynamic=yes
fi
prev=
continue
;;
*)
if test dlfiles = "$prev"; then
func_append dlfiles " $arg"
else
func_append dlprefiles " $arg"
fi
prev=
continue
;;
esac
;;
expsyms)
export_symbols=$arg
test -f "$arg" \
|| func_fatal_error "symbol file '$arg' does not exist"
prev=
continue
;;
expsyms_regex)
export_symbols_regex=$arg
prev=
continue
;;
framework)
case $host in
*-*-darwin*)
case "$deplibs " in
*" $qarg.ltframework "*) ;;
*) func_append deplibs " $qarg.ltframework" # this is fixed later
;;
esac
;;
esac
prev=
continue
;;
inst_prefix)
inst_prefix_dir=$arg
prev=
continue
;;
mllvm)
# Clang does not use LLVM to link, so we can simply discard any
# '-mllvm $arg' options when doing the link step.
prev=
continue
;;
objectlist)
if test -f "$arg"; then
save_arg=$arg
moreargs=
for fil in `cat "$save_arg"`
do
# func_append moreargs " $fil"
arg=$fil
# A libtool-controlled object.
# Check to see that this really is a libtool object.
if func_lalib_unsafe_p "$arg"; then
pic_object=
non_pic_object=
# Read the .lo file
func_source "$arg"
if test -z "$pic_object" ||
test -z "$non_pic_object" ||
test none = "$pic_object" &&
test none = "$non_pic_object"; then
func_fatal_error "cannot find name of object for '$arg'"
fi
# Extract subdirectory from the argument.
func_dirname "$arg" "/" ""
xdir=$func_dirname_result
if test none != "$pic_object"; then
# Prepend the subdirectory the object is found in.
pic_object=$xdir$pic_object
if test dlfiles = "$prev"; then
if test yes = "$build_libtool_libs" && test yes = "$dlopen_support"; then
func_append dlfiles " $pic_object"
prev=
continue
else
# If libtool objects are unsupported, then we need to preload.
prev=dlprefiles
fi
fi
# CHECK ME: I think I busted this. -Ossama
if test dlprefiles = "$prev"; then
# Preload the old-style object.
func_append dlprefiles " $pic_object"
prev=
fi
# A PIC object.
func_append libobjs " $pic_object"
arg=$pic_object
fi
# Non-PIC object.
if test none != "$non_pic_object"; then
# Prepend the subdirectory the object is found in.
non_pic_object=$xdir$non_pic_object
# A standard non-PIC object
func_append non_pic_objects " $non_pic_object"
if test -z "$pic_object" || test none = "$pic_object"; then
arg=$non_pic_object
fi
else
# If the PIC object exists, use it instead.
# $xdir was prepended to $pic_object above.
non_pic_object=$pic_object
func_append non_pic_objects " $non_pic_object"
fi
else
# Only an error if not doing a dry-run.
if $opt_dry_run; then
# Extract subdirectory from the argument.
func_dirname "$arg" "/" ""
xdir=$func_dirname_result
func_lo2o "$arg"
pic_object=$xdir$objdir/$func_lo2o_result
non_pic_object=$xdir$func_lo2o_result
func_append libobjs " $pic_object"
func_append non_pic_objects " $non_pic_object"
else
func_fatal_error "'$arg' is not a valid libtool object"
fi
fi
done
else
func_fatal_error "link input file '$arg' does not exist"
fi
arg=$save_arg
prev=
continue
;;
os2dllname)
os2dllname=$arg
prev=
continue
;;
precious_regex)
precious_files_regex=$arg
prev=
continue
;;
release)
release=-$arg
prev=
continue
;;
rpath | xrpath)
# We need an absolute path.
case $arg in
[\\/]* | [A-Za-z]:[\\/]*) ;;
*)
func_fatal_error "only absolute run-paths are allowed"
;;
esac
if test rpath = "$prev"; then
case "$rpath " in
*" $arg "*) ;;
*) func_append rpath " $arg" ;;
esac
else
case "$xrpath " in
*" $arg "*) ;;
*) func_append xrpath " $arg" ;;
esac
fi
prev=
continue
;;
shrext)
shrext_cmds=$arg
prev=
continue
;;
weak)
func_append weak_libs " $arg"
prev=
continue
;;
xcclinker)
func_append linker_flags " $qarg"
func_append compiler_flags " $qarg"
prev=
func_append compile_command " $qarg"
func_append finalize_command " $qarg"
continue
;;
xcompiler)
func_append compiler_flags " $qarg"
prev=
func_append compile_command " $qarg"
func_append finalize_command " $qarg"
continue
;;
xlinker)
func_append linker_flags " $qarg"
func_append compiler_flags " $wl$qarg"
prev=
func_append compile_command " $wl$qarg"
func_append finalize_command " $wl$qarg"
continue
;;
*)
eval "$prev=\"\$arg\""
prev=
continue
;;
esac
fi # test -n "$prev"
prevarg=$arg
case $arg in
-all-static)
if test -n "$link_static_flag"; then
# See comment for -static flag below, for more details.
func_append compile_command " $link_static_flag"
func_append finalize_command " $link_static_flag"
fi
continue
;;
-allow-undefined)
# FIXME: remove this flag sometime in the future.
func_fatal_error "'-allow-undefined' must not be used because it is the default"
;;
-avoid-version)
avoid_version=yes
continue
;;
-bindir)
prev=bindir
continue
;;
-dlopen)
prev=dlfiles
continue
;;
-dlpreopen)
prev=dlprefiles
continue
;;
-export-dynamic)
export_dynamic=yes
continue
;;
-export-symbols | -export-symbols-regex)
if test -n "$export_symbols" || test -n "$export_symbols_regex"; then
func_fatal_error "more than one -export-symbols argument is not allowed"
fi
if test X-export-symbols = "X$arg"; then
prev=expsyms
else
prev=expsyms_regex
fi
continue
;;
-framework)
prev=framework
continue
;;
-inst-prefix-dir)
prev=inst_prefix
continue
;;
# The native IRIX linker understands -LANG:*, -LIST:* and -LNO:*
# so, if we see these flags be careful not to treat them like -L
-L[A-Z][A-Z]*:*)
case $with_gcc/$host in
no/*-*-irix* | /*-*-irix*)
func_append compile_command " $arg"
func_append finalize_command " $arg"
;;
esac
continue
;;
-L*)
func_stripname "-L" '' "$arg"
if test -z "$func_stripname_result"; then
if test "$#" -gt 0; then
func_fatal_error "no space is allowed between '-L' and '$1'"
else
func_fatal_error "need path for '-L' option"
fi
fi
func_resolve_sysroot "$func_stripname_result"
dir=$func_resolve_sysroot_result
# We need an absolute path.
case $dir in
[\\/]* | [A-Za-z]:[\\/]*) ;;
*)
absdir=`cd "$dir" && pwd`
test -z "$absdir" && \
func_fatal_error "cannot determine absolute directory name of '$dir'"
dir=$absdir
;;
esac
case "$deplibs " in
*" -L$dir "* | *" $arg "*)
# Will only happen for absolute or sysroot arguments
;;
*)
# Preserve sysroot, but never include relative directories
case $dir in
[\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;;
*) func_append deplibs " -L$dir" ;;
esac
func_append lib_search_path " $dir"
;;
esac
case $host in
*-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*)
testbindir=`$ECHO "$dir" | $SED 's*/lib$*/bin*'`
case :$dllsearchpath: in
*":$dir:"*) ;;
::) dllsearchpath=$dir;;
*) func_append dllsearchpath ":$dir";;
esac
case :$dllsearchpath: in
*":$testbindir:"*) ;;
::) dllsearchpath=$testbindir;;
*) func_append dllsearchpath ":$testbindir";;
esac
;;
esac
continue
;;
-l*)
if test X-lc = "X$arg" || test X-lm = "X$arg"; then
case $host in
*-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-beos* | *-cegcc* | *-*-haiku*)
# These systems don't actually have a C or math library (as such)
continue
;;
*-*-os2*)
# These systems don't actually have a C library (as such)
test X-lc = "X$arg" && continue
;;
*-*-openbsd* | *-*-freebsd* | *-*-dragonfly* | *-*-bitrig*)
# Do not include libc due to us having libc/libc_r.
test X-lc = "X$arg" && continue
;;
*-*-rhapsody* | *-*-darwin1.[012])
# Rhapsody C and math libraries are in the System framework
func_append deplibs " System.ltframework"
continue
;;
*-*-sco3.2v5* | *-*-sco5v6*)
# Causes problems with __ctype
test X-lc = "X$arg" && continue
;;
*-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*)
# Compiler inserts libc in the correct place for threads to work
test X-lc = "X$arg" && continue
;;
esac
elif test X-lc_r = "X$arg"; then
case $host in
*-*-openbsd* | *-*-freebsd* | *-*-dragonfly* | *-*-bitrig*)
# Do not include libc_r directly, use -pthread flag.
continue
;;
esac
fi
func_append deplibs " $arg"
continue
;;
-mllvm)
prev=mllvm
continue
;;
-module)
module=yes
continue
;;
# Tru64 UNIX uses -model [arg] to determine the layout of C++
# classes, name mangling, and exception handling.
# Darwin uses the -arch flag to determine output architecture.
-model|-arch|-isysroot|--sysroot)
func_append compiler_flags " $arg"
func_append compile_command " $arg"
func_append finalize_command " $arg"
prev=xcompiler
continue
;;
-mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \
|-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*)
func_append compiler_flags " $arg"
func_append compile_command " $arg"
func_append finalize_command " $arg"
case "$new_inherited_linker_flags " in
*" $arg "*) ;;
* ) func_append new_inherited_linker_flags " $arg" ;;
esac
continue
;;
-multi_module)
single_module=$wl-multi_module
continue
;;
-no-fast-install)
fast_install=no
continue
;;
-no-install)
case $host in
*-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-darwin* | *-cegcc*)
# The PATH hackery in wrapper scripts is required on Windows
# and Darwin in order for the loader to find any dlls it needs.
func_warning "'-no-install' is ignored for $host"
func_warning "assuming '-no-fast-install' instead"
fast_install=no
;;
*) no_install=yes ;;
esac
continue
;;
-no-undefined)
allow_undefined=no
continue
;;
-objectlist)
prev=objectlist
continue
;;
-os2dllname)
prev=os2dllname
continue
;;
-o) prev=output ;;
-precious-files-regex)
prev=precious_regex
continue
;;
-release)
prev=release
continue
;;
-rpath)
prev=rpath
continue
;;
-R)
prev=xrpath
continue
;;
-R*)
func_stripname '-R' '' "$arg"
dir=$func_stripname_result
# We need an absolute path.
case $dir in
[\\/]* | [A-Za-z]:[\\/]*) ;;
=*)
func_stripname '=' '' "$dir"
dir=$lt_sysroot$func_stripname_result
;;
*)
func_fatal_error "only absolute run-paths are allowed"
;;
esac
case "$xrpath " in
*" $dir "*) ;;
*) func_append xrpath " $dir" ;;
esac
continue
;;
-shared)
# The effects of -shared are defined in a previous loop.
continue
;;
-shrext)
prev=shrext
continue
;;
-static | -static-libtool-libs)
# The effects of -static are defined in a previous loop.
# We used to do the same as -all-static on platforms that
# didn't have a PIC flag, but the assumption that the effects
# would be equivalent was wrong. It would break on at least
# Digital Unix and AIX.
continue
;;
-thread-safe)
thread_safe=yes
continue
;;
-version-info)
prev=vinfo
continue
;;
-version-number)
prev=vinfo
vinfo_number=yes
continue
;;
-weak)
prev=weak
continue
;;
-Wc,*)
func_stripname '-Wc,' '' "$arg"
args=$func_stripname_result
arg=
save_ifs=$IFS; IFS=,
for flag in $args; do
IFS=$save_ifs
func_quote_for_eval "$flag"
func_append arg " $func_quote_for_eval_result"
func_append compiler_flags " $func_quote_for_eval_result"
done
IFS=$save_ifs
func_stripname ' ' '' "$arg"
arg=$func_stripname_result
;;
-Wl,*)
func_stripname '-Wl,' '' "$arg"
args=$func_stripname_result
arg=
save_ifs=$IFS; IFS=,
for flag in $args; do
IFS=$save_ifs
func_quote_for_eval "$flag"
func_append arg " $wl$func_quote_for_eval_result"
func_append compiler_flags " $wl$func_quote_for_eval_result"
func_append linker_flags " $func_quote_for_eval_result"
done
IFS=$save_ifs
func_stripname ' ' '' "$arg"
arg=$func_stripname_result
;;
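# Illustration (hypothetical flag, assuming wl='-Wl,'): an argument such as
# '-Wl,-z,relro' is split on commas above, so compiler_flags receives
# '-Wl,-z -Wl,relro' while linker_flags receives the bare '-z relro';
# '-Wc,' works the same way but only feeds compiler_flags.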
-Xcompiler)
prev=xcompiler
continue
;;
-Xlinker)
prev=xlinker
continue
;;
-XCClinker)
prev=xcclinker
continue
;;
# -msg_* for osf cc
-msg_*)
func_quote_for_eval "$arg"
arg=$func_quote_for_eval_result
;;
# Flags to be passed through unchanged, with rationale:
# -64, -mips[0-9] enable 64-bit mode for the SGI compiler
# -r[0-9][0-9]* specify processor for the SGI compiler
# -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler
# +DA*, +DD* enable 64-bit mode for the HP compiler
# -q* compiler args for the IBM compiler
# -m*, -t[45]*, -txscale* architecture-specific flags for GCC
# -F/path path to uninstalled frameworks, gcc on darwin
# -p, -pg, --coverage, -fprofile-* profiling flags for GCC
# -fstack-protector* stack protector flags for GCC
# @file GCC response files
# -tp=* Portland pgcc target processor selection
# --sysroot=* for sysroot support
# -O*, -g*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization
# -specs=* GCC specs files
# -stdlib=* select c++ std lib with clang
# -fsanitize=* Clang/GCC memory and address sanitizer
-64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \
-t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \
-O*|-g*|-flto*|-fwhopr*|-fuse-linker-plugin|-fstack-protector*|-stdlib=*| \
-specs=*|-fsanitize=*)
func_quote_for_eval "$arg"
arg=$func_quote_for_eval_result
func_append compile_command " $arg"
func_append finalize_command " $arg"
func_append compiler_flags " $arg"
continue
;;
-Z*)
if test os2 = "`expr $host : '.*\(os2\)'`"; then
# OS/2 uses -Zxxx to specify OS/2-specific options
compiler_flags="$compiler_flags $arg"
func_append compile_command " $arg"
func_append finalize_command " $arg"
case $arg in
-Zlinker | -Zstack)
prev=xcompiler
;;
esac
continue
else
# Otherwise treat like 'Some other compiler flag' below
func_quote_for_eval "$arg"
arg=$func_quote_for_eval_result
fi
;;
# Some other compiler flag.
-* | +*)
func_quote_for_eval "$arg"
arg=$func_quote_for_eval_result
;;
*.$objext)
# A standard object.
func_append objs " $arg"
;;
*.lo)
# A libtool-controlled object.
# Check to see that this really is a libtool object.
if func_lalib_unsafe_p "$arg"; then
pic_object=
non_pic_object=
# Read the .lo file
func_source "$arg"
if test -z "$pic_object" ||
test -z "$non_pic_object" ||
test none = "$pic_object" &&
test none = "$non_pic_object"; then
func_fatal_error "cannot find name of object for '$arg'"
fi
# Extract subdirectory from the argument.
func_dirname "$arg" "/" ""
xdir=$func_dirname_result
test none = "$pic_object" || {
# Prepend the subdirectory the object is found in.
pic_object=$xdir$pic_object
if test dlfiles = "$prev"; then
if test yes = "$build_libtool_libs" && test yes = "$dlopen_support"; then
func_append dlfiles " $pic_object"
prev=
continue
else
# If libtool objects are unsupported, then we need to preload.
prev=dlprefiles
fi
fi
# CHECK ME: I think I busted this. -Ossama
if test dlprefiles = "$prev"; then
# Preload the old-style object.
func_append dlprefiles " $pic_object"
prev=
fi
# A PIC object.
func_append libobjs " $pic_object"
arg=$pic_object
}
# Non-PIC object.
if test none != "$non_pic_object"; then
# Prepend the subdirectory the object is found in.
non_pic_object=$xdir$non_pic_object
# A standard non-PIC object
func_append non_pic_objects " $non_pic_object"
if test -z "$pic_object" || test none = "$pic_object"; then
arg=$non_pic_object
fi
else
# If the PIC object exists, use it instead.
# $xdir was prepended to $pic_object above.
non_pic_object=$pic_object
func_append non_pic_objects " $non_pic_object"
fi
else
# Only an error if not doing a dry-run.
if $opt_dry_run; then
# Extract subdirectory from the argument.
func_dirname "$arg" "/" ""
xdir=$func_dirname_result
func_lo2o "$arg"
pic_object=$xdir$objdir/$func_lo2o_result
non_pic_object=$xdir$func_lo2o_result
func_append libobjs " $pic_object"
func_append non_pic_objects " $non_pic_object"
else
func_fatal_error "'$arg' is not a valid libtool object"
fi
fi
;;
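# For illustration: a .lo file is a small shell-sourceable text file; it
# typically contains assignments such as
#   pic_object='.libs/foo.o'
#   non_pic_object='foo.o'
# (either may be 'none'), and func_source above reads exactly these two
# variables to locate the PIC and non-PIC objects handled here.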
*.$libext)
# An archive.
func_append deplibs " $arg"
func_append old_deplibs " $arg"
continue
;;
*.la)
# A libtool-controlled library.
func_resolve_sysroot "$arg"
if test dlfiles = "$prev"; then
# This library was specified with -dlopen.
func_append dlfiles " $func_resolve_sysroot_result"
prev=
elif test dlprefiles = "$prev"; then
# The library was specified with -dlpreopen.
func_append dlprefiles " $func_resolve_sysroot_result"
prev=
else
func_append deplibs " $func_resolve_sysroot_result"
fi
continue
;;
# Some other compiler argument.
*)
# Unknown arguments in both finalize_command and compile_command need
# to be aesthetically quoted because they are evaled later.
func_quote_for_eval "$arg"
arg=$func_quote_for_eval_result
;;
esac # arg
# Now actually substitute the argument into the commands.
if test -n "$arg"; then
func_append compile_command " $arg"
func_append finalize_command " $arg"
fi
done # argument parsing loop
test -n "$prev" && \
func_fatal_help "the '$prevarg' option requires an argument"
if test yes = "$export_dynamic" && test -n "$export_dynamic_flag_spec"; then
eval arg=\"$export_dynamic_flag_spec\"
func_append compile_command " $arg"
func_append finalize_command " $arg"
fi
oldlibs=
# calculate the name of the file, without its directory
func_basename "$output"
outputname=$func_basename_result
libobjs_save=$libobjs
if test -n "$shlibpath_var"; then
# get the directories listed in $shlibpath_var
eval shlib_search_path=\`\$ECHO \"\$$shlibpath_var\" \| \$SED \'s/:/ /g\'\`
else
shlib_search_path=
fi
eval sys_lib_search_path=\"$sys_lib_search_path_spec\"
eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\"
# Definition is injected by LT_CONFIG during libtool generation.
func_munge_path_list sys_lib_dlsearch_path "$LT_SYS_LIBRARY_PATH"
func_dirname "$output" "/" ""
output_objdir=$func_dirname_result$objdir
func_to_tool_file "$output_objdir/"
tool_output_objdir=$func_to_tool_file_result
# Create the object directory.
func_mkdir_p "$output_objdir"
# Determine the type of output
case $output in
"")
func_fatal_help "you must specify an output file"
;;
*.$libext) linkmode=oldlib ;;
*.lo | *.$objext) linkmode=obj ;;
*.la) linkmode=lib ;;
*) linkmode=prog ;; # Anything else should be a program.
esac
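# Example of the mapping above (assuming the usual libext=a): an $output of
# 'libfoo.la' selects linkmode=lib, 'foo.lo' or 'foo.o' selects obj,
# 'libfoo.a' selects oldlib, and any other name (say 'myprog') is linked as
# a program.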
specialdeplibs=
libs=
# Find all interdependent deplibs by searching for libraries
# that are linked more than once (e.g. -la -lb -la)
for deplib in $deplibs; do
if $opt_preserve_dup_deps; then
case "$libs " in
*" $deplib "*) func_append specialdeplibs " $deplib" ;;
esac
fi
func_append libs " $deplib"
done
if test lib = "$linkmode"; then
libs="$predeps $libs $compiler_lib_search_path $postdeps"
# Compute libraries that are listed more than once in $predeps
# $postdeps and mark them as special (i.e., whose duplicates are
# not to be eliminated).
pre_post_deps=
if $opt_duplicate_compiler_generated_deps; then
for pre_post_dep in $predeps $postdeps; do
case "$pre_post_deps " in
*" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;;
esac
func_append pre_post_deps " $pre_post_dep"
done
fi
pre_post_deps=
fi
deplibs=
newdependency_libs=
newlib_search_path=
need_relink=no # whether we're linking any uninstalled libtool libraries
notinst_deplibs= # not-installed libtool libraries
notinst_path= # paths that contain not-installed libtool libraries
case $linkmode in
lib)
passes="conv dlpreopen link"
for file in $dlfiles $dlprefiles; do
case $file in
*.la) ;;
*)
func_fatal_help "libraries can '-dlopen' only libtool libraries: $file"
;;
esac
done
;;
prog)
compile_deplibs=
finalize_deplibs=
alldeplibs=false
newdlfiles=
newdlprefiles=
passes="conv scan dlopen dlpreopen link"
;;
*) passes="conv"
;;
esac
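# Summary of the passes chosen above: a libtool library is processed in the
# order conv -> dlpreopen -> link, a program in the order
# conv -> scan -> dlopen -> dlpreopen -> link, and archives/objects only
# need the conv (convenience-library) pass.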
for pass in $passes; do
# The preopen pass in lib mode reverses $deplibs; put it back here
# so that -L comes before libs that need it for instance...
if test lib,link = "$linkmode,$pass"; then
## FIXME: Find the place where the list is rebuilt in the wrong
## order, and fix it there properly
tmp_deplibs=
for deplib in $deplibs; do
tmp_deplibs="$deplib $tmp_deplibs"
done
deplibs=$tmp_deplibs
fi
if test lib,link = "$linkmode,$pass" ||
test prog,scan = "$linkmode,$pass"; then
libs=$deplibs
deplibs=
fi
if test prog = "$linkmode"; then
case $pass in
dlopen) libs=$dlfiles ;;
dlpreopen) libs=$dlprefiles ;;
link)
libs="$deplibs %DEPLIBS%"
test "X$link_all_deplibs" != Xno && libs="$libs $dependency_libs"
;;
esac
fi
if test lib,dlpreopen = "$linkmode,$pass"; then
# Collect and forward deplibs of preopened libtool libs
for lib in $dlprefiles; do
# Ignore non-libtool-libs
dependency_libs=
func_resolve_sysroot "$lib"
case $lib in
*.la) func_source "$func_resolve_sysroot_result" ;;
esac
# Collect preopened libtool deplibs, except any this library
# has declared as weak libs
for deplib in $dependency_libs; do
func_basename "$deplib"
deplib_base=$func_basename_result
case " $weak_libs " in
*" $deplib_base "*) ;;
*) func_append deplibs " $deplib" ;;
esac
done
done
libs=$dlprefiles
fi
if test dlopen = "$pass"; then
# Collect dlpreopened libraries
save_deplibs=$deplibs
deplibs=
fi
for deplib in $libs; do
lib=
found=false
case $deplib in
-mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \
|-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*)
if test prog,link = "$linkmode,$pass"; then
compile_deplibs="$deplib $compile_deplibs"
finalize_deplibs="$deplib $finalize_deplibs"
else
func_append compiler_flags " $deplib"
if test lib = "$linkmode"; then
case "$new_inherited_linker_flags " in
*" $deplib "*) ;;
* ) func_append new_inherited_linker_flags " $deplib" ;;
esac
fi
fi
continue
;;
-l*)
if test lib != "$linkmode" && test prog != "$linkmode"; then
func_warning "'-l' is ignored for archives/objects"
continue
fi
func_stripname '-l' '' "$deplib"
name=$func_stripname_result
if test lib = "$linkmode"; then
searchdirs="$newlib_search_path $lib_search_path $compiler_lib_search_dirs $sys_lib_search_path $shlib_search_path"
else
searchdirs="$newlib_search_path $lib_search_path $sys_lib_search_path $shlib_search_path"
fi
for searchdir in $searchdirs; do
for search_ext in .la $std_shrext .so .a; do
# Search the libtool library
lib=$searchdir/lib$name$search_ext
if test -f "$lib"; then
if test .la = "$search_ext"; then
found=:
else
found=false
fi
break 2
fi
done
done
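# Note on the search just performed: each directory in $searchdirs is probed
# for lib$name with the extensions .la, $std_shrext, .so and .a, in that
# order; only a .la hit sets found=:, so plain .so/.a matches fall through
# to the "not a libtool library" branch below.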
if $found; then
# deplib is a libtool library
# If $allow_libtool_libs_with_static_runtimes && $deplib is a stdlib,
# We need to do some special things here, and not later.
if test yes = "$allow_libtool_libs_with_static_runtimes"; then
case " $predeps $postdeps " in
*" $deplib "*)
if func_lalib_p "$lib"; then
library_names=
old_library=
func_source "$lib"
for l in $old_library $library_names; do
ll=$l
done
if test "X$ll" = "X$old_library"; then # only static version available
found=false
func_dirname "$lib" "" "."
ladir=$func_dirname_result
lib=$ladir/$old_library
if test prog,link = "$linkmode,$pass"; then
compile_deplibs="$deplib $compile_deplibs"
finalize_deplibs="$deplib $finalize_deplibs"
else
deplibs="$deplib $deplibs"
test lib = "$linkmode" && newdependency_libs="$deplib $newdependency_libs"
fi
continue
fi
fi
;;
*) ;;
esac
fi
else
# deplib doesn't seem to be a libtool library
if test prog,link = "$linkmode,$pass"; then
compile_deplibs="$deplib $compile_deplibs"
finalize_deplibs="$deplib $finalize_deplibs"
else
deplibs="$deplib $deplibs"
test lib = "$linkmode" && newdependency_libs="$deplib $newdependency_libs"
fi
continue
fi
;; # -l
*.ltframework)
if test prog,link = "$linkmode,$pass"; then
compile_deplibs="$deplib $compile_deplibs"
finalize_deplibs="$deplib $finalize_deplibs"
else
deplibs="$deplib $deplibs"
if test lib = "$linkmode"; then
case "$new_inherited_linker_flags " in
*" $deplib "*) ;;
* ) func_append new_inherited_linker_flags " $deplib" ;;
esac
fi
fi
continue
;;
-L*)
case $linkmode in
lib)
deplibs="$deplib $deplibs"
test conv = "$pass" && continue
newdependency_libs="$deplib $newdependency_libs"
func_stripname '-L' '' "$deplib"
func_resolve_sysroot "$func_stripname_result"
func_append newlib_search_path " $func_resolve_sysroot_result"
;;
prog)
if test conv = "$pass"; then
deplibs="$deplib $deplibs"
continue
fi
if test scan = "$pass"; then
deplibs="$deplib $deplibs"
else
compile_deplibs="$deplib $compile_deplibs"
finalize_deplibs="$deplib $finalize_deplibs"
fi
func_stripname '-L' '' "$deplib"
func_resolve_sysroot "$func_stripname_result"
func_append newlib_search_path " $func_resolve_sysroot_result"
;;
*)
func_warning "'-L' is ignored for archives/objects"
;;
esac # linkmode
continue
;; # -L
-R*)
if test link = "$pass"; then
func_stripname '-R' '' "$deplib"
func_resolve_sysroot "$func_stripname_result"
dir=$func_resolve_sysroot_result
# Make sure the xrpath contains only unique directories.
case "$xrpath " in
*" $dir "*) ;;
*) func_append xrpath " $dir" ;;
esac
fi
deplibs="$deplib $deplibs"
continue
;;
*.la)
func_resolve_sysroot "$deplib"
lib=$func_resolve_sysroot_result
;;
*.$libext)
if test conv = "$pass"; then
deplibs="$deplib $deplibs"
continue
fi
case $linkmode in
lib)
# Linking convenience modules into shared libraries is allowed,
# but linking other static libraries is non-portable.
case " $dlpreconveniencelibs " in
*" $deplib "*) ;;
*)
valid_a_lib=false
case $deplibs_check_method in
match_pattern*)
set dummy $deplibs_check_method; shift
match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"`
if eval "\$ECHO \"$deplib\"" 2>/dev/null | $SED 10q \
| $EGREP "$match_pattern_regex" > /dev/null; then
valid_a_lib=:
fi
;;
pass_all)
valid_a_lib=:
;;
esac
if $valid_a_lib; then
echo
$ECHO "*** Warning: Linking the shared library $output against the"
$ECHO "*** static library $deplib is not portable!"
deplibs="$deplib $deplibs"
else
echo
$ECHO "*** Warning: Trying to link with static lib archive $deplib."
echo "*** I have the capability to make that library automatically link in when"
echo "*** you link to this library. But I can only do this if you have a"
echo "*** shared version of the library, which you do not appear to have"
echo "*** because the file extensions .$libext of this argument makes me believe"
echo "*** that it is just a static archive that I should not use here."
fi
;;
esac
continue
;;
prog)
if test link != "$pass"; then
deplibs="$deplib $deplibs"
else
compile_deplibs="$deplib $compile_deplibs"
finalize_deplibs="$deplib $finalize_deplibs"
fi
continue
;;
esac # linkmode
;; # *.$libext
*.lo | *.$objext)
if test conv = "$pass"; then
deplibs="$deplib $deplibs"
elif test prog = "$linkmode"; then
if test dlpreopen = "$pass" || test yes != "$dlopen_support" || test no = "$build_libtool_libs"; then
# If there is no dlopen support or we're linking statically,
# we need to preload.
func_append newdlprefiles " $deplib"
compile_deplibs="$deplib $compile_deplibs"
finalize_deplibs="$deplib $finalize_deplibs"
else
func_append newdlfiles " $deplib"
fi
fi
continue
;;
%DEPLIBS%)
alldeplibs=:
continue
;;
esac # case $deplib
$found || test -f "$lib" \
|| func_fatal_error "cannot find the library '$lib' or unhandled argument '$deplib'"
# Check to see that this really is a libtool archive.
func_lalib_unsafe_p "$lib" \
|| func_fatal_error "'$lib' is not a valid libtool archive"
func_dirname "$lib" "" "."
ladir=$func_dirname_result
dlname=
dlopen=
dlpreopen=
libdir=
library_names=
old_library=
inherited_linker_flags=
# If the library was installed with an old release of libtool,
# it will not redefine variables installed, or shouldnotlink
installed=yes
shouldnotlink=no
avoidtemprpath=
# Read the .la file
func_source "$lib"
# Convert "-framework foo" to "foo.ltframework"
if test -n "$inherited_linker_flags"; then
tmp_inherited_linker_flags=`$ECHO "$inherited_linker_flags" | $SED 's/-framework \([^ $]*\)/\1.ltframework/g'`
for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do
case " $new_inherited_linker_flags " in
*" $tmp_inherited_linker_flag "*) ;;
*) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";;
esac
done
fi
dependency_libs=`$ECHO " $dependency_libs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
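# Illustration: on Darwin an inherited '-framework CoreFoundation' is carried
# around as the single token 'CoreFoundation.ltframework' (first sed above)
# and converted back to '-framework CoreFoundation' when dependency_libs is
# rebuilt (second sed above).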
if test lib,link = "$linkmode,$pass" ||
test prog,scan = "$linkmode,$pass" ||
{ test prog != "$linkmode" && test lib != "$linkmode"; }; then
test -n "$dlopen" && func_append dlfiles " $dlopen"
test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen"
fi
if test conv = "$pass"; then
# Only check for convenience libraries
deplibs="$lib $deplibs"
if test -z "$libdir"; then
if test -z "$old_library"; then
func_fatal_error "cannot find name of link library for '$lib'"
fi
# It is a libtool convenience library, so add in its objects.
func_append convenience " $ladir/$objdir/$old_library"
func_append old_convenience " $ladir/$objdir/$old_library"
tmp_libs=
for deplib in $dependency_libs; do
deplibs="$deplib $deplibs"
if $opt_preserve_dup_deps; then
case "$tmp_libs " in
*" $deplib "*) func_append specialdeplibs " $deplib" ;;
esac
fi
func_append tmp_libs " $deplib"
done
elif test prog != "$linkmode" && test lib != "$linkmode"; then
func_fatal_error "'$lib' is not a convenience library"
fi
continue
fi # $pass = conv
# Get the name of the library we link against.
linklib=
if test -n "$old_library" &&
{ test yes = "$prefer_static_libs" ||
test built,no = "$prefer_static_libs,$installed"; }; then
linklib=$old_library
else
for l in $old_library $library_names; do
linklib=$l
done
fi
if test -z "$linklib"; then
func_fatal_error "cannot find name of link library for '$lib'"
fi
# This library was specified with -dlopen.
if test dlopen = "$pass"; then
test -z "$libdir" \
&& func_fatal_error "cannot -dlopen a convenience library: '$lib'"
if test -z "$dlname" ||
test yes != "$dlopen_support" ||
test no = "$build_libtool_libs"
then
# If there is no dlname, no dlopen support or we're linking
# statically, we need to preload. We also need to preload any
# dependent libraries so libltdl's deplib preloader doesn't
# bomb out in the load deplibs phase.
func_append dlprefiles " $lib $dependency_libs"
else
func_append newdlfiles " $lib"
fi
continue
fi # $pass = dlopen
# We need an absolute path.
case $ladir in
[\\/]* | [A-Za-z]:[\\/]*) abs_ladir=$ladir ;;
*)
abs_ladir=`cd "$ladir" && pwd`
if test -z "$abs_ladir"; then
func_warning "cannot determine absolute directory name of '$ladir'"
func_warning "passing it literally to the linker, although it might fail"
abs_ladir=$ladir
fi
;;
esac
func_basename "$lib"
laname=$func_basename_result
# Find the relevant object directory and library name.
if test yes = "$installed"; then
if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
func_warning "library '$lib' was moved."
dir=$ladir
absdir=$abs_ladir
libdir=$abs_ladir
else
dir=$lt_sysroot$libdir
absdir=$lt_sysroot$libdir
fi
test yes = "$hardcode_automatic" && avoidtemprpath=yes
else
if test ! -f "$ladir/$objdir/$linklib" && test -f "$abs_ladir/$linklib"; then
dir=$ladir
absdir=$abs_ladir
# Remove this search path later
func_append notinst_path " $abs_ladir"
else
dir=$ladir/$objdir
absdir=$abs_ladir/$objdir
# Remove this search path later
func_append notinst_path " $abs_ladir"
fi
fi # $installed = yes
func_stripname 'lib' '.la' "$laname"
name=$func_stripname_result
# This library was specified with -dlpreopen.
if test dlpreopen = "$pass"; then
if test -z "$libdir" && test prog = "$linkmode"; then
func_fatal_error "only libraries may -dlpreopen a convenience library: '$lib'"
fi
case $host in
# special handling for platforms with PE-DLLs.
*cygwin* | *mingw* | *cegcc* )
# Linker will automatically link against shared library if both
# static and shared are present. Therefore, ensure we extract
# symbols from the import library if a shared library is present
# (otherwise, the dlopen module name will be incorrect). We do
# this by putting the import library name into $newdlprefiles.
# We recover the dlopen module name by 'saving' the la file
# name in a special purpose variable, and (later) extracting the
# dlname from the la file.
if test -n "$dlname"; then
func_tr_sh "$dir/$linklib"
eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname"
func_append newdlprefiles " $dir/$linklib"
else
func_append newdlprefiles " $dir/$old_library"
# Keep a list of preopened convenience libraries to check
# that they are being used correctly in the link pass.
test -z "$libdir" && \
func_append dlpreconveniencelibs " $dir/$old_library"
fi
;;
* )
# Prefer using a static library (so that no silly _DYNAMIC symbols
# are required to link).
if test -n "$old_library"; then
func_append newdlprefiles " $dir/$old_library"
# Keep a list of preopened convenience libraries to check
# that they are being used correctly in the link pass.
test -z "$libdir" && \
func_append dlpreconveniencelibs " $dir/$old_library"
# Otherwise, use the dlname, so that lt_dlopen finds it.
elif test -n "$dlname"; then
func_append newdlprefiles " $dir/$dlname"
else
func_append newdlprefiles " $dir/$linklib"
fi
;;
esac
fi # $pass = dlpreopen
if test -z "$libdir"; then
# Link the convenience library
if test lib = "$linkmode"; then
deplibs="$dir/$old_library $deplibs"
elif test prog,link = "$linkmode,$pass"; then
compile_deplibs="$dir/$old_library $compile_deplibs"
finalize_deplibs="$dir/$old_library $finalize_deplibs"
else
deplibs="$lib $deplibs" # used for prog,scan pass
fi
continue
fi
if test prog = "$linkmode" && test link != "$pass"; then
func_append newlib_search_path " $ladir"
deplibs="$lib $deplibs"
linkalldeplibs=false
if test no != "$link_all_deplibs" || test -z "$library_names" ||
test no = "$build_libtool_libs"; then
linkalldeplibs=:
fi
tmp_libs=
for deplib in $dependency_libs; do
case $deplib in
-L*) func_stripname '-L' '' "$deplib"
func_resolve_sysroot "$func_stripname_result"
func_append newlib_search_path " $func_resolve_sysroot_result"
;;
esac
# Need to link against all dependency_libs?
if $linkalldeplibs; then
deplibs="$deplib $deplibs"
else
# Need to hardcode shared library paths
# or/and link against static libraries
newdependency_libs="$deplib $newdependency_libs"
fi
if $opt_preserve_dup_deps; then
case "$tmp_libs " in
*" $deplib "*) func_append specialdeplibs " $deplib" ;;
esac
fi
func_append tmp_libs " $deplib"
done # for deplib
continue
fi # $linkmode = prog...
if test prog,link = "$linkmode,$pass"; then
if test -n "$library_names" &&
{ { test no = "$prefer_static_libs" ||
test built,yes = "$prefer_static_libs,$installed"; } ||
test -z "$old_library"; }; then
# We need to hardcode the library path
if test -n "$shlibpath_var" && test -z "$avoidtemprpath"; then
# Make sure the rpath contains only unique directories.
case $temp_rpath: in
*"$absdir:"*) ;;
*) func_append temp_rpath "$absdir:" ;;
esac
fi
# Hardcode the library path.
# Skip directories that are in the system default run-time
# search path.
case " $sys_lib_dlsearch_path " in
*" $absdir "*) ;;
*)
case "$compile_rpath " in
*" $absdir "*) ;;
*) func_append compile_rpath " $absdir" ;;
esac
;;
esac
case " $sys_lib_dlsearch_path " in
*" $libdir "*) ;;
*)
case "$finalize_rpath " in
*" $libdir "*) ;;
*) func_append finalize_rpath " $libdir" ;;
esac
;;
esac
fi # $linkmode,$pass = prog,link...
if $alldeplibs &&
{ test pass_all = "$deplibs_check_method" ||
{ test yes = "$build_libtool_libs" &&
test -n "$library_names"; }; }; then
# We only need to search for static libraries
continue
fi
fi
link_static=no # Whether the deplib will be linked statically
use_static_libs=$prefer_static_libs
if test built = "$use_static_libs" && test yes = "$installed"; then
use_static_libs=no
fi
if test -n "$library_names" &&
{ test no = "$use_static_libs" || test -z "$old_library"; }; then
case $host in
*cygwin* | *mingw* | *cegcc* | *os2*)
# No point in relinking DLLs because paths are not encoded
func_append notinst_deplibs " $lib"
need_relink=no
;;
*)
if test no = "$installed"; then
func_append notinst_deplibs " $lib"
need_relink=yes
fi
;;
esac
# This is a shared library
# Warn about portability, can't link against -module's on some
# systems (darwin). Don't bleat about dlopened modules though!
dlopenmodule=
for dlpremoduletest in $dlprefiles; do
if test "X$dlpremoduletest" = "X$lib"; then
dlopenmodule=$dlpremoduletest
break
fi
done
if test -z "$dlopenmodule" && test yes = "$shouldnotlink" && test link = "$pass"; then
echo
if test prog = "$linkmode"; then
$ECHO "*** Warning: Linking the executable $output against the loadable module"
else
$ECHO "*** Warning: Linking the shared library $output against the loadable module"
fi
$ECHO "*** $linklib is not portable!"
fi
if test lib = "$linkmode" &&
test yes = "$hardcode_into_libs"; then
# Hardcode the library path.
# Skip directories that are in the system default run-time
# search path.
case " $sys_lib_dlsearch_path " in
*" $absdir "*) ;;
*)
case "$compile_rpath " in
*" $absdir "*) ;;
*) func_append compile_rpath " $absdir" ;;
esac
;;
esac
case " $sys_lib_dlsearch_path " in
*" $libdir "*) ;;
*)
case "$finalize_rpath " in
*" $libdir "*) ;;
*) func_append finalize_rpath " $libdir" ;;
esac
;;
esac
fi
if test -n "$old_archive_from_expsyms_cmds"; then
# figure out the soname
set dummy $library_names
shift
realname=$1
shift
libname=`eval "\\$ECHO \"$libname_spec\""`
# use dlname if we got it. it's perfectly good, no?
if test -n "$dlname"; then
soname=$dlname
elif test -n "$soname_spec"; then
# bleh windows
case $host in
*cygwin* | mingw* | *cegcc* | *os2*)
func_arith $current - $age
major=$func_arith_result
versuffix=-$major
;;
esac
eval soname=\"$soname_spec\"
else
soname=$realname
fi
# Make a new name for the extract_expsyms_cmds to use
soroot=$soname
func_basename "$soroot"
soname=$func_basename_result
func_stripname 'lib' '.dll' "$soname"
newlib=libimp-$func_stripname_result.a
# If the library has no export list, then create one now
if test -f "$output_objdir/$soname-def"; then :
else
func_verbose "extracting exported symbol list from '$soname'"
func_execute_cmds "$extract_expsyms_cmds" 'exit $?'
fi
# Create $newlib
if test -f "$output_objdir/$newlib"; then :; else
func_verbose "generating import library for '$soname'"
func_execute_cmds "$old_archive_from_expsyms_cmds" 'exit $?'
fi
# make sure the library variables are pointing to the new library
dir=$output_objdir
linklib=$newlib
fi # test -n "$old_archive_from_expsyms_cmds"
if test prog = "$linkmode" || test relink != "$opt_mode"; then
add_shlibpath=
add_dir=
add=
lib_linked=yes
case $hardcode_action in
immediate | unsupported)
if test no = "$hardcode_direct"; then
add=$dir/$linklib
case $host in
*-*-sco3.2v5.0.[024]*) add_dir=-L$dir ;;
*-*-sysv4*uw2*) add_dir=-L$dir ;;
*-*-sysv5OpenUNIX* | *-*-sysv5UnixWare7.[01].[10]* | \
*-*-unixware7*) add_dir=-L$dir ;;
*-*-darwin* )
# if the lib is a (non-dlopened) module then we cannot
# link against it, someone is ignoring the earlier warnings
if /usr/bin/file -L $add 2> /dev/null |
$GREP ": [^:]* bundle" >/dev/null; then
if test "X$dlopenmodule" != "X$lib"; then
$ECHO "*** Warning: lib $linklib is a module, not a shared library"
if test -z "$old_library"; then
echo
echo "*** And there doesn't seem to be a static archive available"
echo "*** The link will probably fail, sorry"
else
add=$dir/$old_library
fi
elif test -n "$old_library"; then
add=$dir/$old_library
fi
fi
esac
elif test no = "$hardcode_minus_L"; then
case $host in
*-*-sunos*) add_shlibpath=$dir ;;
esac
add_dir=-L$dir
add=-l$name
elif test no = "$hardcode_shlibpath_var"; then
add_shlibpath=$dir
add=-l$name
else
lib_linked=no
fi
;;
relink)
if test yes = "$hardcode_direct" &&
test no = "$hardcode_direct_absolute"; then
add=$dir/$linklib
elif test yes = "$hardcode_minus_L"; then
add_dir=-L$absdir
# Try looking first in the location we're being installed to.
if test -n "$inst_prefix_dir"; then
case $libdir in
[\\/]*)
func_append add_dir " -L$inst_prefix_dir$libdir"
;;
esac
fi
add=-l$name
elif test yes = "$hardcode_shlibpath_var"; then
add_shlibpath=$dir
add=-l$name
else
lib_linked=no
fi
;;
*) lib_linked=no ;;
esac
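# The hardcode_* variables consulted above come from the libtool
# configuration and describe how this platform can embed run-time library
# locations: depending on them the script names the library file directly
# ($dir/$linklib), falls back to '-L$dir -l$name', or records the directory
# in $compile_shlibpath; if none of these applies, lib_linked=no and the
# 'unsupported hardcode properties' error below is raised.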
if test yes != "$lib_linked"; then
func_fatal_configuration "unsupported hardcode properties"
fi
if test -n "$add_shlibpath"; then
case :$compile_shlibpath: in
*":$add_shlibpath:"*) ;;
*) func_append compile_shlibpath "$add_shlibpath:" ;;
esac
fi
if test prog = "$linkmode"; then
test -n "$add_dir" && compile_deplibs="$add_dir $compile_deplibs"
test -n "$add" && compile_deplibs="$add $compile_deplibs"
else
test -n "$add_dir" && deplibs="$add_dir $deplibs"
test -n "$add" && deplibs="$add $deplibs"
if test yes != "$hardcode_direct" &&
test yes != "$hardcode_minus_L" &&
test yes = "$hardcode_shlibpath_var"; then
case :$finalize_shlibpath: in
*":$libdir:"*) ;;
*) func_append finalize_shlibpath "$libdir:" ;;
esac
fi
fi
fi
if test prog = "$linkmode" || test relink = "$opt_mode"; then
add_shlibpath=
add_dir=
add=
# Finalize command for both is simple: just hardcode it.
if test yes = "$hardcode_direct" &&
test no = "$hardcode_direct_absolute"; then
add=$libdir/$linklib
elif test yes = "$hardcode_minus_L"; then
add_dir=-L$libdir
add=-l$name
elif test yes = "$hardcode_shlibpath_var"; then
case :$finalize_shlibpath: in
*":$libdir:"*) ;;
*) func_append finalize_shlibpath "$libdir:" ;;
esac
add=-l$name
elif test yes = "$hardcode_automatic"; then
if test -n "$inst_prefix_dir" &&
test -f "$inst_prefix_dir$libdir/$linklib"; then
add=$inst_prefix_dir$libdir/$linklib
else
add=$libdir/$linklib
fi
else
# We cannot seem to hardcode it, guess we'll fake it.
add_dir=-L$libdir
# Try looking first in the location we're being installed to.
if test -n "$inst_prefix_dir"; then
case $libdir in
[\\/]*)
func_append add_dir " -L$inst_prefix_dir$libdir"
;;
esac
fi
add=-l$name
fi
if test prog = "$linkmode"; then
test -n "$add_dir" && finalize_deplibs="$add_dir $finalize_deplibs"
test -n "$add" && finalize_deplibs="$add $finalize_deplibs"
else
test -n "$add_dir" && deplibs="$add_dir $deplibs"
test -n "$add" && deplibs="$add $deplibs"
fi
fi
elif test prog = "$linkmode"; then
# Here we assume that one of hardcode_direct or hardcode_minus_L
# is not unsupported. This is valid on all known static and
# shared platforms.
if test unsupported != "$hardcode_direct"; then
test -n "$old_library" && linklib=$old_library
compile_deplibs="$dir/$linklib $compile_deplibs"
finalize_deplibs="$dir/$linklib $finalize_deplibs"
else
compile_deplibs="-l$name -L$dir $compile_deplibs"
finalize_deplibs="-l$name -L$dir $finalize_deplibs"
fi
elif test yes = "$build_libtool_libs"; then
# Not a shared library
if test pass_all != "$deplibs_check_method"; then
# We're trying to link a shared library against a static one
# but the system doesn't support it.
# Just print a warning and add the library to dependency_libs so
# that the program can be linked against the static library.
echo
$ECHO "*** Warning: This system cannot link to static lib archive $lib."
echo "*** I have the capability to make that library automatically link in when"
echo "*** you link to this library. But I can only do this if you have a"
echo "*** shared version of the library, which you do not appear to have."
if test yes = "$module"; then
echo "*** But as you try to build a module library, libtool will still create "
echo "*** a static module, that should work as long as the dlopening application"
echo "*** is linked with the -dlopen flag to resolve symbols at runtime."
if test -z "$global_symbol_pipe"; then
echo
echo "*** However, this would only work if libtool was able to extract symbol"
echo "*** lists from a program, using 'nm' or equivalent, but libtool could"
echo "*** not find such a program. So, this module is probably useless."
echo "*** 'nm' from GNU binutils and a full rebuild may help."
fi
if test no = "$build_old_libs"; then
build_libtool_libs=module
build_old_libs=yes
else
build_libtool_libs=no
fi
fi
else
deplibs="$dir/$old_library $deplibs"
link_static=yes
fi
fi # link shared/static library?
if test lib = "$linkmode"; then
if test -n "$dependency_libs" &&
{ test yes != "$hardcode_into_libs" ||
test yes = "$build_old_libs" ||
test yes = "$link_static"; }; then
# Extract -R from dependency_libs
temp_deplibs=
for libdir in $dependency_libs; do
case $libdir in
-R*) func_stripname '-R' '' "$libdir"
temp_xrpath=$func_stripname_result
case " $xrpath " in
*" $temp_xrpath "*) ;;
*) func_append xrpath " $temp_xrpath";;
esac;;
*) func_append temp_deplibs " $libdir";;
esac
done
dependency_libs=$temp_deplibs
fi
func_append newlib_search_path " $absdir"
# Link against this library
test no = "$link_static" && newdependency_libs="$abs_ladir/$laname $newdependency_libs"
# ... and its dependency_libs
tmp_libs=
for deplib in $dependency_libs; do
newdependency_libs="$deplib $newdependency_libs"
case $deplib in
-L*) func_stripname '-L' '' "$deplib"
func_resolve_sysroot "$func_stripname_result";;
*) func_resolve_sysroot "$deplib" ;;
esac
if $opt_preserve_dup_deps; then
case "$tmp_libs " in
*" $func_resolve_sysroot_result "*)
func_append specialdeplibs " $func_resolve_sysroot_result" ;;
esac
fi
func_append tmp_libs " $func_resolve_sysroot_result"
done
if test no != "$link_all_deplibs"; then
# Add the search paths of all dependency libraries
for deplib in $dependency_libs; do
path=
case $deplib in
-L*) path=$deplib ;;
*.la)
func_resolve_sysroot "$deplib"
deplib=$func_resolve_sysroot_result
func_dirname "$deplib" "" "."
dir=$func_dirname_result
# We need an absolute path.
case $dir in
[\\/]* | [A-Za-z]:[\\/]*) absdir=$dir ;;
*)
absdir=`cd "$dir" && pwd`
if test -z "$absdir"; then
func_warning "cannot determine absolute directory name of '$dir'"
absdir=$dir
fi
;;
esac
if $GREP "^installed=no" $deplib > /dev/null; then
case $host in
*-*-darwin*)
depdepl=
eval deplibrary_names=`$SED -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
if test -n "$deplibrary_names"; then
for tmp in $deplibrary_names; do
depdepl=$tmp
done
if test -f "$absdir/$objdir/$depdepl"; then
depdepl=$absdir/$objdir/$depdepl
darwin_install_name=`$OTOOL -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'`
if test -z "$darwin_install_name"; then
darwin_install_name=`$OTOOL64 -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'`
fi
func_append compiler_flags " $wl-dylib_file $wl$darwin_install_name:$depdepl"
func_append linker_flags " -dylib_file $darwin_install_name:$depdepl"
path=
fi
fi
;;
*)
path=-L$absdir/$objdir
;;
esac
else
eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
test -z "$libdir" && \
func_fatal_error "'$deplib' is not a valid libtool archive"
test "$absdir" != "$libdir" && \
func_warning "'$deplib' seems to be moved"
path=-L$absdir
fi
;;
esac
case " $deplibs " in
*" $path "*) ;;
*) deplibs="$path $deplibs" ;;
esac
done
fi # link_all_deplibs != no
fi # linkmode = lib
done # for deplib in $libs
if test link = "$pass"; then
if test prog = "$linkmode"; then
compile_deplibs="$new_inherited_linker_flags $compile_deplibs"
finalize_deplibs="$new_inherited_linker_flags $finalize_deplibs"
else
compiler_flags="$compiler_flags "`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
fi
fi
dependency_libs=$newdependency_libs
if test dlpreopen = "$pass"; then
# Link the dlpreopened libraries before other libraries
for deplib in $save_deplibs; do
deplibs="$deplib $deplibs"
done
fi
if test dlopen != "$pass"; then
test conv = "$pass" || {
# Make sure lib_search_path contains only unique directories.
lib_search_path=
for dir in $newlib_search_path; do
case "$lib_search_path " in
*" $dir "*) ;;
*) func_append lib_search_path " $dir" ;;
esac
done
newlib_search_path=
}
if test prog,link = "$linkmode,$pass"; then
vars="compile_deplibs finalize_deplibs"
else
vars=deplibs
fi
for var in $vars dependency_libs; do
# Add libraries to $var in reverse order
eval tmp_libs=\"\$$var\"
new_libs=
for deplib in $tmp_libs; do
# FIXME: Pedantically, this is the right thing to do, so
# that some nasty dependency loop isn't accidentally
# broken:
#new_libs="$deplib $new_libs"
# Pragmatically, this seems to cause very few problems in
# practice:
case $deplib in
-L*) new_libs="$deplib $new_libs" ;;
-R*) ;;
*)
# And here is the reason: when a library appears more
# than once as an explicit dependence of a library, or
# is implicitly linked in more than once by the
# compiler, it is considered special, and multiple
# occurrences thereof are not removed. Compare this
# with having the same library being listed as a
# dependency of multiple other libraries: in this case,
# we know (pedantically, we assume) the library does not
# need to be listed more than once, so we keep only the
# last copy. This is not always right, but it is rare
# enough that we require users that really mean to play
# such unportable linking tricks to link the library
# using -Wl,-lname, so that libtool does not consider it
# for duplicate removal.
case " $specialdeplibs " in
*" $deplib "*) new_libs="$deplib $new_libs" ;;
*)
case " $new_libs " in
*" $deplib "*) ;;
*) new_libs="$deplib $new_libs" ;;
esac
;;
esac
;;
esac
done
tmp_libs=
for deplib in $new_libs; do
case $deplib in
-L*)
case " $tmp_libs " in
*" $deplib "*) ;;
*) func_append tmp_libs " $deplib" ;;
esac
;;
*) func_append tmp_libs " $deplib" ;;
esac
done
eval $var=\"$tmp_libs\"
done # for var
fi
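# Worked example of the de-duplication above, reusing the '-la -lb -la' case
# mentioned earlier: the duplicate '-la' is collapsed to a single entry
# unless '-la' was recorded in $specialdeplibs, in which case both
# occurrences are preserved so that the dependency loop is not broken;
# '-L' entries are always reduced to one copy each.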
# Add Sun CC postdeps if required:
test CXX = "$tagname" && {
case $host_os in
linux*)
case `$CC -V 2>&1 | sed 5q` in
*Sun\ C*) # Sun C++ 5.9
func_suncc_cstd_abi
if test no != "$suncc_use_cstd_abi"; then
func_append postdeps ' -library=Cstd -library=Crun'
fi
;;
esac
;;
solaris*)
func_cc_basename "$CC"
case $func_cc_basename_result in
CC* | sunCC*)
func_suncc_cstd_abi
if test no != "$suncc_use_cstd_abi"; then
func_append postdeps ' -library=Cstd -library=Crun'
fi
;;
esac
;;
esac
}
# Last step: remove runtime libs from dependency_libs
# (they stay in deplibs)
tmp_libs=
for i in $dependency_libs; do
case " $predeps $postdeps $compiler_lib_search_path " in
*" $i "*)
i=
;;
esac
if test -n "$i"; then
func_append tmp_libs " $i"
fi
done
dependency_libs=$tmp_libs
done # for pass
if test prog = "$linkmode"; then
dlfiles=$newdlfiles
fi
if test prog = "$linkmode" || test lib = "$linkmode"; then
dlprefiles=$newdlprefiles
fi
case $linkmode in
oldlib)
if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then
func_warning "'-dlopen' is ignored for archives"
fi
case " $deplibs" in
*\ -l* | *\ -L*)
func_warning "'-l' and '-L' are ignored for archives" ;;
esac
test -n "$rpath" && \
func_warning "'-rpath' is ignored for archives"
test -n "$xrpath" && \
func_warning "'-R' is ignored for archives"
test -n "$vinfo" && \
func_warning "'-version-info/-version-number' is ignored for archives"
test -n "$release" && \
func_warning "'-release' is ignored for archives"
test -n "$export_symbols$export_symbols_regex" && \
func_warning "'-export-symbols' is ignored for archives"
# Now set the variables for building old libraries.
build_libtool_libs=no
oldlibs=$output
func_append objs "$old_deplibs"
;;
lib)
# Make sure we only generate libraries of the form 'libNAME.la'.
case $outputname in
lib*)
func_stripname 'lib' '.la' "$outputname"
name=$func_stripname_result
eval shared_ext=\"$shrext_cmds\"
eval libname=\"$libname_spec\"
;;
*)
test no = "$module" \
&& func_fatal_help "libtool library '$output' must begin with 'lib'"
if test no != "$need_lib_prefix"; then
# Add the "lib" prefix for modules if required
func_stripname '' '.la' "$outputname"
name=$func_stripname_result
eval shared_ext=\"$shrext_cmds\"
eval libname=\"$libname_spec\"
else
func_stripname '' '.la' "$outputname"
libname=$func_stripname_result
fi
;;
esac
if test -n "$objs"; then
if test pass_all != "$deplibs_check_method"; then
func_fatal_error "cannot build libtool library '$output' from non-libtool objects on this host:$objs"
else
echo
$ECHO "*** Warning: Linking the shared library $output against the non-libtool"
$ECHO "*** objects $objs is not portable!"
func_append libobjs " $objs"
fi
fi
test no = "$dlself" \
|| func_warning "'-dlopen self' is ignored for libtool libraries"
set dummy $rpath
shift
test 1 -lt "$#" \
&& func_warning "ignoring multiple '-rpath's for a libtool library"
install_libdir=$1
oldlibs=
if test -z "$rpath"; then
if test yes = "$build_libtool_libs"; then
# Building a libtool convenience library.
# Some compilers have problems with a '.al' extension so
# convenience libraries should have the same extension that an
# archive normally would.
oldlibs="$output_objdir/$libname.$libext $oldlibs"
build_libtool_libs=convenience
build_old_libs=yes
fi
test -n "$vinfo" && \
func_warning "'-version-info/-version-number' is ignored for convenience libraries"
test -n "$release" && \
func_warning "'-release' is ignored for convenience libraries"
else
# Parse the version information argument.
save_ifs=$IFS; IFS=:
set dummy $vinfo 0 0 0
shift
IFS=$save_ifs
test -n "$7" && \
func_fatal_help "too many parameters to '-version-info'"
# convert absolute version numbers to libtool ages
# this retains compatibility with .la files and attempts
# to make the code below a bit more comprehensible
case $vinfo_number in
yes)
number_major=$1
number_minor=$2
number_revision=$3
#
# There are really only two kinds -- those that
# use the current revision as the major version
# and those that subtract age and use age as
# a minor version. But, then there is irix
# that has an extra 1 added just for fun
#
case $version_type in
# correct linux to gnu/linux during the next big refactor
darwin|freebsd-elf|linux|osf|windows|none)
func_arith $number_major + $number_minor
current=$func_arith_result
age=$number_minor
revision=$number_revision
;;
freebsd-aout|qnx|sunos)
current=$number_major
revision=$number_minor
age=0
;;
irix|nonstopux)
func_arith $number_major + $number_minor
current=$func_arith_result
age=$number_minor
revision=$number_minor
lt_irix_increment=no
;;
*)
func_fatal_configuration "$modename: unknown library version type '$version_type'"
;;
esac
;;
no)
current=$1
revision=$2
age=$3
;;
esac
# Check that each of the things are valid numbers.
case $current in
0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;;
*)
func_error "CURRENT '$current' must be a nonnegative integer"
func_fatal_error "'$vinfo' is not valid version information"
;;
esac
case $revision in
0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;;
*)
func_error "REVISION '$revision' must be a nonnegative integer"
func_fatal_error "'$vinfo' is not valid version information"
;;
esac
case $age in
0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;;
*)
func_error "AGE '$age' must be a nonnegative integer"
func_fatal_error "'$vinfo' is not valid version information"
;;
esac
if test "$age" -gt "$current"; then
func_error "AGE '$age' is greater than the current interface number '$current'"
func_fatal_error "'$vinfo' is not valid version information"
fi
# Calculate the version variables.
major=
versuffix=
verstring=
case $version_type in
none) ;;
darwin)
# Like Linux, but with the current version available in
# verstring for coding it into the library header
func_arith $current - $age
major=.$func_arith_result
versuffix=$major.$age.$revision
# Darwin ld doesn't like 0 for these options...
func_arith $current + 1
minor_current=$func_arith_result
xlcverstring="$wl-compatibility_version $wl$minor_current $wl-current_version $wl$minor_current.$revision"
verstring="-compatibility_version $minor_current -current_version $minor_current.$revision"
# On Darwin other compilers
case $CC in
nagfor*)
verstring="$wl-compatibility_version $wl$minor_current $wl-current_version $wl$minor_current.$revision"
;;
*)
verstring="-compatibility_version $minor_current -current_version $minor_current.$revision"
;;
esac
;;
freebsd-aout)
major=.$current
versuffix=.$current.$revision
;;
freebsd-elf)
func_arith $current - $age
major=.$func_arith_result
versuffix=$major.$age.$revision
;;
irix | nonstopux)
if test no = "$lt_irix_increment"; then
func_arith $current - $age
else
func_arith $current - $age + 1
fi
major=$func_arith_result
case $version_type in
nonstopux) verstring_prefix=nonstopux ;;
*) verstring_prefix=sgi ;;
esac
verstring=$verstring_prefix$major.$revision
# Add in all the interfaces that we are compatible with.
loop=$revision
while test 0 -ne "$loop"; do
func_arith $revision - $loop
iface=$func_arith_result
func_arith $loop - 1
loop=$func_arith_result
verstring=$verstring_prefix$major.$iface:$verstring
done
# Before this point, $major must not contain '.'.
major=.$major
versuffix=$major.$revision
;;
linux) # correct to gnu/linux during the next big refactor
func_arith $current - $age
major=.$func_arith_result
versuffix=$major.$age.$revision
;;
osf)
func_arith $current - $age
major=.$func_arith_result
versuffix=.$current.$age.$revision
verstring=$current.$age.$revision
# Add in all the interfaces that we are compatible with.
loop=$age
while test 0 -ne "$loop"; do
func_arith $current - $loop
iface=$func_arith_result
func_arith $loop - 1
loop=$func_arith_result
verstring=$verstring:$iface.0
done
# Make executables depend on our current version.
func_append verstring ":$current.0"
;;
qnx)
major=.$current
versuffix=.$current
;;
sco)
major=.$current
versuffix=.$current
;;
sunos)
major=.$current
versuffix=.$current.$revision
;;
windows)
# Use '-' rather than '.', since we only want one
# extension on DOS 8.3 file systems.
func_arith $current - $age
major=$func_arith_result
versuffix=-$major
;;
*)
func_fatal_configuration "unknown library version type '$version_type'"
;;
esac
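# Worked example for version_type=linux with the usual GNU/Linux
# library_names_spec: '-version-info 3:2:1' gives current=3, revision=2,
# age=1, hence major=.2 and versuffix=.2.1.2 (libNAME.so.2.1.2), while
# '-version-number 3:2:1' is first converted above to current=5, age=2,
# revision=1 and therefore yields libNAME.so.3.2.1.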
# Clear the version info if we defaulted, and they specified a release.
if test -z "$vinfo" && test -n "$release"; then
major=
case $version_type in
darwin)
# we can't check for "0.0" in archive_cmds due to quoting
# problems, so we reset it completely
verstring=
;;
*)
verstring=0.0
;;
esac
if test no = "$need_version"; then
versuffix=
else
versuffix=.0.0
fi
fi
# Remove version info from name if versioning should be avoided
if test yes,no = "$avoid_version,$need_version"; then
major=
versuffix=
verstring=
fi
# Check to see if the archive will have undefined symbols.
if test yes = "$allow_undefined"; then
if test unsupported = "$allow_undefined_flag"; then
if test yes = "$build_old_libs"; then
func_warning "undefined symbols not allowed in $host shared libraries; building static only"
build_libtool_libs=no
else
func_fatal_error "can't build $host shared library unless -no-undefined is specified"
fi
fi
else
# Don't allow undefined symbols.
allow_undefined_flag=$no_undefined_flag
fi
fi
func_generate_dlsyms "$libname" "$libname" :
func_append libobjs " $symfileobj"
test " " = "$libobjs" && libobjs=
if test relink != "$opt_mode"; then
# Remove our outputs, but don't remove object files since they
# may have been created when compiling PIC objects.
removelist=
tempremovelist=`$ECHO "$output_objdir/*"`
for p in $tempremovelist; do
case $p in
*.$objext | *.gcno)
;;
$output_objdir/$outputname | $output_objdir/$libname.* | $output_objdir/$libname$release.*)
if test -n "$precious_files_regex"; then
if $ECHO "$p" | $EGREP -e "$precious_files_regex" >/dev/null 2>&1
then
continue
fi
fi
func_append removelist " $p"
;;
*) ;;
esac
done
test -n "$removelist" && \
func_show_eval "${RM}r \$removelist"
fi
# Now set the variables for building old libraries.
if test yes = "$build_old_libs" && test convenience != "$build_libtool_libs"; then
func_append oldlibs " $output_objdir/$libname.$libext"
# Transform .lo files to .o files.
oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.$libext$/d; $lo2o" | $NL2SP`
fi
# Eliminate all temporary directories.
#for path in $notinst_path; do
# lib_search_path=`$ECHO "$lib_search_path " | $SED "s% $path % %g"`
# deplibs=`$ECHO "$deplibs " | $SED "s% -L$path % %g"`
# dependency_libs=`$ECHO "$dependency_libs " | $SED "s% -L$path % %g"`
#done
if test -n "$xrpath"; then
# If the user specified any rpath flags, then add them.
temp_xrpath=
for libdir in $xrpath; do
func_replace_sysroot "$libdir"
func_append temp_xrpath " -R$func_replace_sysroot_result"
case "$finalize_rpath " in
*" $libdir "*) ;;
*) func_append finalize_rpath " $libdir" ;;
esac
done
if test yes != "$hardcode_into_libs" || test yes = "$build_old_libs"; then
dependency_libs="$temp_xrpath $dependency_libs"
fi
fi
# Make sure dlfiles contains only unique files that won't be dlpreopened
old_dlfiles=$dlfiles
dlfiles=
for lib in $old_dlfiles; do
case " $dlprefiles $dlfiles " in
*" $lib "*) ;;
*) func_append dlfiles " $lib" ;;
esac
done
# Make sure dlprefiles contains only unique files
old_dlprefiles=$dlprefiles
dlprefiles=
for lib in $old_dlprefiles; do
case "$dlprefiles " in
*" $lib "*) ;;
*) func_append dlprefiles " $lib" ;;
esac
done
if test yes = "$build_libtool_libs"; then
if test -n "$rpath"; then
case $host in
*-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-beos* | *-cegcc* | *-*-haiku*)
# these systems don't actually have a c library (as such)!
;;
*-*-rhapsody* | *-*-darwin1.[012])
# Rhapsody C library is in the System framework
func_append deplibs " System.ltframework"
;;
*-*-netbsd*)
# Don't link with libc until the a.out ld.so is fixed.
;;
*-*-openbsd* | *-*-freebsd* | *-*-dragonfly*)
# Do not include libc due to us having libc/libc_r.
;;
*-*-sco3.2v5* | *-*-sco5v6*)
# Causes problems with __ctype
;;
*-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*)
# Compiler inserts libc in the correct place for threads to work
;;
*)
# Add libc to deplibs on all other systems if necessary.
if test yes = "$build_libtool_need_lc"; then
func_append deplibs " -lc"
fi
;;
esac
fi
# Transform deplibs into only deplibs that can be linked in shared.
name_save=$name
libname_save=$libname
release_save=$release
versuffix_save=$versuffix
major_save=$major
# I'm not sure if I'm treating the release correctly. I think
# release should show up in the -l (ie -lgmp5) so we don't want to
# add it in twice. Is that correct?
release=
versuffix=
major=
newdeplibs=
droppeddeps=no
case $deplibs_check_method in
pass_all)
# Don't check for shared/static. Everything works.
# This might be a little naive. We might want to check
# whether the library exists or not. But this is on
# osf3 & osf4 and I'm not really sure... Just
# implementing what was already the behavior.
newdeplibs=$deplibs
;;
test_compile)
# This code stresses the "libraries are programs" paradigm to its
# limits. Maybe even breaks it. We compile a program, linking it
# against the deplibs as a proxy for the library. Then we can check
# whether they linked in statically or dynamically with ldd.
$opt_dry_run || $RM conftest.c
cat > conftest.c <<EOF
int main() { return 0; }
EOF
if test yes = "$want_nocaseglob"; then
shopt -s nocaseglob
potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
$nocaseglob
else
potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
fi
for potent_lib in $potential_libs; do
# Follow soft links.
if ls -lLd "$potent_lib" 2>/dev/null |
$GREP " -> " >/dev/null; then
continue
fi
# The statement above tries to avoid entering an
# endless loop below, in case of cyclic links.
# We might still enter an endless loop, since a link
# loop can be closed while we follow links,
# but so what?
potlib=$potent_lib
while test -h "$potlib" 2>/dev/null; do
potliblink=`ls -ld $potlib | $SED 's/.* -> //'`
case $potliblink in
[\\/]* | [A-Za-z]:[\\/]*) potlib=$potliblink;;
*) potlib=`$ECHO "$potlib" | $SED 's|[^/]*$||'`"$potliblink";;
esac
done
if eval $file_magic_cmd \"\$potlib\" 2>/dev/null |
$SED -e 10q |
$EGREP "$file_magic_regex" > /dev/null; then
func_append newdeplibs " $a_deplib"
a_deplib=
break 2
fi
done
done
fi
if test -n "$a_deplib"; then
droppeddeps=yes
echo
$ECHO "*** Warning: linker path does not have real file for library $a_deplib."
echo "*** I have the capability to make that library automatically link in when"
echo "*** you link to this library. But I can only do this if you have a"
echo "*** shared version of the library, which you do not appear to have"
echo "*** because I did check the linker path looking for a file starting"
if test -z "$potlib"; then
$ECHO "*** with $libname but no candidates were found. (...for file magic test)"
else
$ECHO "*** with $libname and none of the candidates passed a file format test"
$ECHO "*** using a file magic. Last file checked: $potlib"
fi
fi
;;
*)
# Add a -L argument.
func_append newdeplibs " $a_deplib"
;;
esac
done # Gone through all deplibs.
;;
match_pattern*)
set dummy $deplibs_check_method; shift
match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"`
for a_deplib in $deplibs; do
case $a_deplib in
-l*)
func_stripname -l '' "$a_deplib"
name=$func_stripname_result
if test yes = "$allow_libtool_libs_with_static_runtimes"; then
case " $predeps $postdeps " in
*" $a_deplib "*)
func_append newdeplibs " $a_deplib"
a_deplib=
;;
esac
fi
if test -n "$a_deplib"; then
libname=`eval "\\$ECHO \"$libname_spec\""`
for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
for potent_lib in $potential_libs; do
potlib=$potent_lib # see symlink-check above in file_magic test
if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \
$EGREP "$match_pattern_regex" > /dev/null; then
func_append newdeplibs " $a_deplib"
a_deplib=
break 2
fi
done
done
fi
if test -n "$a_deplib"; then
droppeddeps=yes
echo
$ECHO "*** Warning: linker path does not have real file for library $a_deplib."
echo "*** I have the capability to make that library automatically link in when"
echo "*** you link to this library. But I can only do this if you have a"
echo "*** shared version of the library, which you do not appear to have"
echo "*** because I did check the linker path looking for a file starting"
if test -z "$potlib"; then
$ECHO "*** with $libname but no candidates were found. (...for regex pattern test)"
else
$ECHO "*** with $libname and none of the candidates passed a file format test"
$ECHO "*** using a regex pattern. Last file checked: $potlib"
fi
fi
;;
*)
# Add a -L argument.
func_append newdeplibs " $a_deplib"
;;
esac
done # Gone through all deplibs.
;;
none | unknown | *)
newdeplibs=
tmp_deplibs=`$ECHO " $deplibs" | $SED 's/ -lc$//; s/ -[LR][^ ]*//g'`
if test yes = "$allow_libtool_libs_with_static_runtimes"; then
for i in $predeps $postdeps; do
# can't use Xsed below, because $i might contain '/'
tmp_deplibs=`$ECHO " $tmp_deplibs" | $SED "s|$i||"`
done
fi
case $tmp_deplibs in
*[!\ \ ]*)
echo
if test none = "$deplibs_check_method"; then
echo "*** Warning: inter-library dependencies are not supported in this platform."
else
echo "*** Warning: inter-library dependencies are not known to be supported."
fi
echo "*** All declared inter-library dependencies are being dropped."
droppeddeps=yes
;;
esac
;;
esac
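# Recap of the $deplibs_check_method handling above: pass_all keeps every
# deplib unchanged; test_compile links a throw-away program and inspects it
# with ldd; file_magic and match_pattern probe the library search path and
# keep a '-l' dependency only when a candidate file matches the configured
# magic or regex; none/unknown drop all declared inter-library dependencies
# and set droppeddeps=yes, which triggers the warnings below.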
versuffix=$versuffix_save
major=$major_save
release=$release_save
libname=$libname_save
name=$name_save
case $host in
*-*-rhapsody* | *-*-darwin1.[012])
# On Rhapsody replace the C library with the System framework
newdeplibs=`$ECHO " $newdeplibs" | $SED 's/ -lc / System.ltframework /'`
;;
esac
if test yes = "$droppeddeps"; then
if test yes = "$module"; then
echo
echo "*** Warning: libtool could not satisfy all declared inter-library"
$ECHO "*** dependencies of module $libname. Therefore, libtool will create"
echo "*** a static module, that should work as long as the dlopening"
echo "*** application is linked with the -dlopen flag."
if test -z "$global_symbol_pipe"; then
echo
echo "*** However, this would only work if libtool was able to extract symbol"
echo "*** lists from a program, using 'nm' or equivalent, but libtool could"
echo "*** not find such a program. So, this module is probably useless."
echo "*** 'nm' from GNU binutils and a full rebuild may help."
fi
if test no = "$build_old_libs"; then
oldlibs=$output_objdir/$libname.$libext
build_libtool_libs=module
build_old_libs=yes
else
build_libtool_libs=no
fi
else
echo "*** The inter-library dependencies that have been dropped here will be"
echo "*** automatically added whenever a program is linked with this library"
echo "*** or is declared to -dlopen it."
if test no = "$allow_undefined"; then
echo
echo "*** Since this library must not contain undefined symbols,"
echo "*** because either the platform does not support them or"
echo "*** it was explicitly requested with -no-undefined,"
echo "*** libtool will only create a static version of it."
if test no = "$build_old_libs"; then
oldlibs=$output_objdir/$libname.$libext
build_libtool_libs=module
build_old_libs=yes
else
build_libtool_libs=no
fi
fi
fi
fi
# Done checking deplibs!
deplibs=$newdeplibs
fi
# Time to change all our "foo.ltframework" stuff back to "-framework foo"
case $host in
*-*-darwin*)
newdeplibs=`$ECHO " $newdeplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
new_inherited_linker_flags=`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
deplibs=`$ECHO " $deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
;;
esac
# move library search paths that coincide with paths to not yet
# installed libraries to the beginning of the library search list
new_libs=
for path in $notinst_path; do
case " $new_libs " in
*" -L$path/$objdir "*) ;;
*)
case " $deplibs " in
*" -L$path/$objdir "*)
func_append new_libs " -L$path/$objdir" ;;
esac
;;
esac
done
for deplib in $deplibs; do
case $deplib in
-L*)
case " $new_libs " in
*" $deplib "*) ;;
*) func_append new_libs " $deplib" ;;
esac
;;
*) func_append new_libs " $deplib" ;;
esac
done
deplibs=$new_libs
# All the library-specific variables (install_libdir is set above).
library_names=
old_library=
dlname=
# Test again, we may have decided not to build it any more
if test yes = "$build_libtool_libs"; then
# Remove $wl instances when linking with ld.
# FIXME: should test the right _cmds variable.
case $archive_cmds in
*\$LD\ *) wl= ;;
esac
if test yes = "$hardcode_into_libs"; then
# Hardcode the library paths
hardcode_libdirs=
dep_rpath=
rpath=$finalize_rpath
test relink = "$opt_mode" || rpath=$compile_rpath$rpath
for libdir in $rpath; do
if test -n "$hardcode_libdir_flag_spec"; then
if test -n "$hardcode_libdir_separator"; then
func_replace_sysroot "$libdir"
libdir=$func_replace_sysroot_result
if test -z "$hardcode_libdirs"; then
hardcode_libdirs=$libdir
else
# Just accumulate the unique libdirs.
case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
*"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
;;
*)
func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
;;
esac
fi
else
eval flag=\"$hardcode_libdir_flag_spec\"
func_append dep_rpath " $flag"
fi
elif test -n "$runpath_var"; then
case "$perm_rpath " in
*" $libdir "*) ;;
*) func_append perm_rpath " $libdir" ;;
esac
fi
done
# Substitute the hardcoded libdirs into the rpath.
if test -n "$hardcode_libdir_separator" &&
test -n "$hardcode_libdirs"; then
libdir=$hardcode_libdirs
eval "dep_rpath=\"$hardcode_libdir_flag_spec\""
fi
if test -n "$runpath_var" && test -n "$perm_rpath"; then
# We should set the runpath_var.
rpath=
for dir in $perm_rpath; do
func_append rpath "$dir:"
done
eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var"
fi
test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs"
fi
shlibpath=$finalize_shlibpath
test relink = "$opt_mode" || shlibpath=$compile_shlibpath$shlibpath
if test -n "$shlibpath"; then
eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var"
fi
# Get the real and link names of the library.
eval shared_ext=\"$shrext_cmds\"
eval library_names=\"$library_names_spec\"
set dummy $library_names
shift
realname=$1
shift
if test -n "$soname_spec"; then
eval soname=\"$soname_spec\"
else
soname=$realname
fi
if test -z "$dlname"; then
dlname=$soname
fi
lib=$output_objdir/$realname
linknames=
for link
do
func_append linknames " $link"
done
# Use standard objects if they are pic
test -z "$pic_flag" && libobjs=`$ECHO "$libobjs" | $SP2NL | $SED "$lo2o" | $NL2SP`
test "X$libobjs" = "X " && libobjs=
delfiles=
if test -n "$export_symbols" && test -n "$include_expsyms"; then
$opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp"
export_symbols=$output_objdir/$libname.uexp
func_append delfiles " $export_symbols"
fi
orig_export_symbols=
case $host_os in
cygwin* | mingw* | cegcc*)
if test -n "$export_symbols" && test -z "$export_symbols_regex"; then
# exporting using user supplied symfile
func_dll_def_p "$export_symbols" || {
# and it's NOT already a .def file. Must figure out
# which of the given symbols are data symbols and tag
# them as such. So, trigger use of export_symbols_cmds.
# export_symbols gets reassigned inside the "prepare
# the list of exported symbols" if statement, so the
# include_expsyms logic still works.
orig_export_symbols=$export_symbols
export_symbols=
always_export_symbols=yes
}
fi
;;
esac
# Prepare the list of exported symbols
if test -z "$export_symbols"; then
if test yes = "$always_export_symbols" || test -n "$export_symbols_regex"; then
func_verbose "generating symbol list for '$libname.la'"
export_symbols=$output_objdir/$libname.exp
$opt_dry_run || $RM $export_symbols
cmds=$export_symbols_cmds
save_ifs=$IFS; IFS='~'
for cmd1 in $cmds; do
IFS=$save_ifs
# Take the normal branch if the nm_file_list_spec branch
# doesn't work or if tool conversion is not needed.
case $nm_file_list_spec~$to_tool_file_cmd in
*~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*)
try_normal_branch=yes
eval cmd=\"$cmd1\"
func_len " $cmd"
len=$func_len_result
;;
*)
try_normal_branch=no
;;
esac
if test yes = "$try_normal_branch" \
&& { test "$len" -lt "$max_cmd_len" \
|| test "$max_cmd_len" -le -1; }
then
func_show_eval "$cmd" 'exit $?'
skipped_export=false
elif test -n "$nm_file_list_spec"; then
func_basename "$output"
output_la=$func_basename_result
save_libobjs=$libobjs
save_output=$output
output=$output_objdir/$output_la.nm
func_to_tool_file "$output"
libobjs=$nm_file_list_spec$func_to_tool_file_result
func_append delfiles " $output"
func_verbose "creating $NM input file list: $output"
for obj in $save_libobjs; do
func_to_tool_file "$obj"
$ECHO "$func_to_tool_file_result"
done > "$output"
eval cmd=\"$cmd1\"
func_show_eval "$cmd" 'exit $?'
output=$save_output
libobjs=$save_libobjs
skipped_export=false
else
# The command line is too long to execute in one step.
func_verbose "using reloadable object file for export list..."
skipped_export=:
# Break out early, otherwise skipped_export may be
# set to false by a later but shorter cmd.
break
fi
done
IFS=$save_ifs
if test -n "$export_symbols_regex" && test : != "$skipped_export"; then
func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"'
func_show_eval '$MV "${export_symbols}T" "$export_symbols"'
fi
fi
fi
if test -n "$export_symbols" && test -n "$include_expsyms"; then
tmp_export_symbols=$export_symbols
test -n "$orig_export_symbols" && tmp_export_symbols=$orig_export_symbols
$opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
fi
if test : != "$skipped_export" && test -n "$orig_export_symbols"; then
# The given exports_symbols file has to be filtered, so filter it.
func_verbose "filter symbol list for '$libname.la' to tag DATA exports"
# FIXME: $output_objdir/$libname.filter potentially contains lots of
# 's' commands, which not all seds can handle. GNU sed should be fine
# though. Also, the filter scales superlinearly with the number of
# global variables. join(1) would be nice here, but unfortunately
# isn't a blessed tool.
$opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
func_append delfiles " $export_symbols $output_objdir/$libname.filter"
export_symbols=$output_objdir/$libname.def
$opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
fi
tmp_deplibs=
for test_deplib in $deplibs; do
case " $convenience " in
*" $test_deplib "*) ;;
*)
func_append tmp_deplibs " $test_deplib"
;;
esac
done
deplibs=$tmp_deplibs
if test -n "$convenience"; then
if test -n "$whole_archive_flag_spec" &&
test yes = "$compiler_needs_object" &&
test -z "$libobjs"; then
# extract the archives, so we have objects to list.
# TODO: could optimize this to just extract one archive.
whole_archive_flag_spec=
fi
if test -n "$whole_archive_flag_spec"; then
save_libobjs=$libobjs
eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
test "X$libobjs" = "X " && libobjs=
else
gentop=$output_objdir/${outputname}x
func_append generated " $gentop"
func_extract_archives $gentop $convenience
func_append libobjs " $func_extract_archives_result"
test "X$libobjs" = "X " && libobjs=
fi
fi
if test yes = "$thread_safe" && test -n "$thread_safe_flag_spec"; then
eval flag=\"$thread_safe_flag_spec\"
func_append linker_flags " $flag"
fi
# Make a backup of the uninstalled library when relinking
if test relink = "$opt_mode"; then
$opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $?
fi
# Do each of the archive commands.
if test yes = "$module" && test -n "$module_cmds"; then
if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
eval test_cmds=\"$module_expsym_cmds\"
cmds=$module_expsym_cmds
else
eval test_cmds=\"$module_cmds\"
cmds=$module_cmds
fi
else
if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
eval test_cmds=\"$archive_expsym_cmds\"
cmds=$archive_expsym_cmds
else
eval test_cmds=\"$archive_cmds\"
cmds=$archive_cmds
fi
fi
if test : != "$skipped_export" &&
func_len " $test_cmds" &&
len=$func_len_result &&
test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
:
else
# The command line is too long to link in one step, link piecewise
# or, if using GNU ld and skipped_export is not :, use a linker
# script.
# Save the value of $output and $libobjs because we want to
# use them later. If we have whole_archive_flag_spec, we
# want to use save_libobjs as it was before
# whole_archive_flag_spec was expanded, because we can't
# assume the linker understands whole_archive_flag_spec.
# This may have to be revisited, in case too many
# convenience libraries get linked in and end up exceeding
# the spec.
if test -z "$convenience" || test -z "$whole_archive_flag_spec"; then
save_libobjs=$libobjs
fi
save_output=$output
func_basename "$output"
output_la=$func_basename_result
# Clear the reloadable object creation command queue and
# initialize k to one.
test_cmds=
concat_cmds=
objlist=
last_robj=
k=1
if test -n "$save_libobjs" && test : != "$skipped_export" && test yes = "$with_gnu_ld"; then
output=$output_objdir/$output_la.lnkscript
func_verbose "creating GNU ld script: $output"
echo 'INPUT (' > $output
for obj in $save_libobjs
do
func_to_tool_file "$obj"
$ECHO "$func_to_tool_file_result" >> $output
done
echo ')' >> $output
func_append delfiles " $output"
func_to_tool_file "$output"
output=$func_to_tool_file_result
elif test -n "$save_libobjs" && test : != "$skipped_export" && test -n "$file_list_spec"; then
output=$output_objdir/$output_la.lnk
func_verbose "creating linker input file list: $output"
: > $output
set x $save_libobjs
shift
firstobj=
if test yes = "$compiler_needs_object"; then
firstobj="$1 "
shift
fi
for obj
do
func_to_tool_file "$obj"
$ECHO "$func_to_tool_file_result" >> $output
done
func_append delfiles " $output"
func_to_tool_file "$output"
output=$firstobj\"$file_list_spec$func_to_tool_file_result\"
else
if test -n "$save_libobjs"; then
func_verbose "creating reloadable object files..."
output=$output_objdir/$output_la-$k.$objext
eval test_cmds=\"$reload_cmds\"
func_len " $test_cmds"
len0=$func_len_result
len=$len0
# Loop over the list of objects to be linked.
for obj in $save_libobjs
do
func_len " $obj"
func_arith $len + $func_len_result
len=$func_arith_result
if test -z "$objlist" ||
test "$len" -lt "$max_cmd_len"; then
func_append objlist " $obj"
else
# The command $test_cmds is almost too long, add a
# command to the queue.
if test 1 -eq "$k"; then
# The first file doesn't have a previous command to add.
reload_objs=$objlist
eval concat_cmds=\"$reload_cmds\"
else
# All subsequent reloadable object files will link in
# the last one created.
reload_objs="$objlist $last_robj"
eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\"
fi
last_robj=$output_objdir/$output_la-$k.$objext
func_arith $k + 1
k=$func_arith_result
output=$output_objdir/$output_la-$k.$objext
objlist=" $obj"
func_len " $last_robj"
func_arith $len0 + $func_len_result
len=$func_arith_result
fi
done
# Handle the remaining objects by creating one last
# reloadable object file. All subsequent reloadable object
# files will link in the last one created.
test -z "$concat_cmds" || concat_cmds=$concat_cmds~
reload_objs="$objlist $last_robj"
eval concat_cmds=\"\$concat_cmds$reload_cmds\"
if test -n "$last_robj"; then
eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\"
fi
func_append delfiles " $output"
else
output=
fi
${skipped_export-false} && {
func_verbose "generating symbol list for '$libname.la'"
export_symbols=$output_objdir/$libname.exp
$opt_dry_run || $RM $export_symbols
libobjs=$output
# Append the command to create the export file.
test -z "$concat_cmds" || concat_cmds=$concat_cmds~
eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\"
if test -n "$last_robj"; then
eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\"
fi
}
test -n "$save_libobjs" &&
func_verbose "creating a temporary reloadable object file: $output"
# Loop through the commands generated above and execute them.
save_ifs=$IFS; IFS='~'
for cmd in $concat_cmds; do
IFS=$save_ifs
$opt_quiet || {
func_quote_for_expand "$cmd"
eval "func_echo $func_quote_for_expand_result"
}
$opt_dry_run || eval "$cmd" || {
lt_exit=$?
# Restore the uninstalled library and exit
if test relink = "$opt_mode"; then
( cd "$output_objdir" && \
$RM "${realname}T" && \
$MV "${realname}U" "$realname" )
fi
exit $lt_exit
}
done
IFS=$save_ifs
if test -n "$export_symbols_regex" && ${skipped_export-false}; then
func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"'
func_show_eval '$MV "${export_symbols}T" "$export_symbols"'
fi
fi
${skipped_export-false} && {
if test -n "$export_symbols" && test -n "$include_expsyms"; then
tmp_export_symbols=$export_symbols
test -n "$orig_export_symbols" && tmp_export_symbols=$orig_export_symbols
$opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
fi
if test -n "$orig_export_symbols"; then
# The given exports_symbols file has to be filtered, so filter it.
func_verbose "filter symbol list for '$libname.la' to tag DATA exports"
# FIXME: $output_objdir/$libname.filter potentially contains lots of
# 's' commands, which not all seds can handle. GNU sed should be fine
# though. Also, the filter scales superlinearly with the number of
# global variables. join(1) would be nice here, but unfortunately
# isn't a blessed tool.
$opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
func_append delfiles " $export_symbols $output_objdir/$libname.filter"
export_symbols=$output_objdir/$libname.def
$opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
fi
}
libobjs=$output
# Restore the value of output.
output=$save_output
if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then
eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
test "X$libobjs" = "X " && libobjs=
fi
# Expand the library linking commands again to reset the
# value of $libobjs for piecewise linking.
# Do each of the archive commands.
if test yes = "$module" && test -n "$module_cmds"; then
if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
cmds=$module_expsym_cmds
else
cmds=$module_cmds
fi
else
if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
cmds=$archive_expsym_cmds
else
cmds=$archive_cmds
fi
fi
fi
if test -n "$delfiles"; then
# Append the command to remove temporary files to $cmds.
eval cmds=\"\$cmds~\$RM $delfiles\"
fi
# Add any objects from preloaded convenience libraries
if test -n "$dlprefiles"; then
gentop=$output_objdir/${outputname}x
func_append generated " $gentop"
func_extract_archives $gentop $dlprefiles
func_append libobjs " $func_extract_archives_result"
test "X$libobjs" = "X " && libobjs=
fi
save_ifs=$IFS; IFS='~'
for cmd in $cmds; do
IFS=$sp$nl
eval cmd=\"$cmd\"
IFS=$save_ifs
$opt_quiet || {
func_quote_for_expand "$cmd"
eval "func_echo $func_quote_for_expand_result"
}
$opt_dry_run || eval "$cmd" || {
lt_exit=$?
# Restore the uninstalled library and exit
if test relink = "$opt_mode"; then
( cd "$output_objdir" && \
$RM "${realname}T" && \
$MV "${realname}U" "$realname" )
fi
exit $lt_exit
}
done
IFS=$save_ifs
# Restore the uninstalled library and exit
if test relink = "$opt_mode"; then
$opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $?
if test -n "$convenience"; then
if test -z "$whole_archive_flag_spec"; then
func_show_eval '${RM}r "$gentop"'
fi
fi
exit $EXIT_SUCCESS
fi
# Create links to the real library.
for linkname in $linknames; do
if test "$realname" != "$linkname"; then
func_show_eval '(cd "$output_objdir" && $RM "$linkname" && $LN_S "$realname" "$linkname")' 'exit $?'
fi
done
# If -module or -export-dynamic was specified, set the dlname.
if test yes = "$module" || test yes = "$export_dynamic"; then
# On all known operating systems, these are identical.
dlname=$soname
fi
fi
;;
obj)
if test -n "$dlfiles$dlprefiles" || test no != "$dlself"; then
func_warning "'-dlopen' is ignored for objects"
fi
case " $deplibs" in
*\ -l* | *\ -L*)
func_warning "'-l' and '-L' are ignored for objects" ;;
esac
test -n "$rpath" && \
func_warning "'-rpath' is ignored for objects"
test -n "$xrpath" && \
func_warning "'-R' is ignored for objects"
test -n "$vinfo" && \
func_warning "'-version-info' is ignored for objects"
test -n "$release" && \
func_warning "'-release' is ignored for objects"
case $output in
*.lo)
test -n "$objs$old_deplibs" && \
func_fatal_error "cannot build library object '$output' from non-libtool objects"
libobj=$output
func_lo2o "$libobj"
obj=$func_lo2o_result
;;
*)
libobj=
obj=$output
;;
esac
# Delete the old objects.
$opt_dry_run || $RM $obj $libobj
# Objects from convenience libraries. This assumes
# single-version convenience libraries. Whenever we create
# different ones for PIC/non-PIC, we'll have to duplicate
# the extraction.
reload_conv_objs=
gentop=
# if reload_cmds runs $LD directly, get rid of -Wl from
# whole_archive_flag_spec and hope we can get by with turning comma
# into space.
case $reload_cmds in
*\$LD[\ \$]*) wl= ;;
esac
if test -n "$convenience"; then
if test -n "$whole_archive_flag_spec"; then
eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\"
test -n "$wl" || tmp_whole_archive_flags=`$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'`
reload_conv_objs=$reload_objs\ $tmp_whole_archive_flags
else
gentop=$output_objdir/${obj}x
func_append generated " $gentop"
func_extract_archives $gentop $convenience
reload_conv_objs="$reload_objs $func_extract_archives_result"
fi
fi
# If we're not building shared, we need to use non_pic_objs
test yes = "$build_libtool_libs" || libobjs=$non_pic_objects
# Create the old-style object.
reload_objs=$objs$old_deplibs' '`$ECHO "$libobjs" | $SP2NL | $SED "/\.$libext$/d; /\.lib$/d; $lo2o" | $NL2SP`' '$reload_conv_objs
output=$obj
func_execute_cmds "$reload_cmds" 'exit $?'
# Exit if we aren't doing a library object file.
if test -z "$libobj"; then
if test -n "$gentop"; then
func_show_eval '${RM}r "$gentop"'
fi
exit $EXIT_SUCCESS
fi
test yes = "$build_libtool_libs" || {
if test -n "$gentop"; then
func_show_eval '${RM}r "$gentop"'
fi
# Create an invalid libtool object if no PIC, so that we don't
# accidentally link it into a program.
# $show "echo timestamp > $libobj"
# $opt_dry_run || eval "echo timestamp > $libobj" || exit $?
exit $EXIT_SUCCESS
}
if test -n "$pic_flag" || test default != "$pic_mode"; then
# Only do commands if we really have different PIC objects.
reload_objs="$libobjs $reload_conv_objs"
output=$libobj
func_execute_cmds "$reload_cmds" 'exit $?'
fi
if test -n "$gentop"; then
func_show_eval '${RM}r "$gentop"'
fi
exit $EXIT_SUCCESS
;;
prog)
case $host in
*cygwin*) func_stripname '' '.exe' "$output"
output=$func_stripname_result.exe;;
esac
test -n "$vinfo" && \
func_warning "'-version-info' is ignored for programs"
test -n "$release" && \
func_warning "'-release' is ignored for programs"
$preload \
&& test unknown,unknown,unknown = "$dlopen_support,$dlopen_self,$dlopen_self_static" \
&& func_warning "'LT_INIT([dlopen])' not used. Assuming no dlopen support."
case $host in
*-*-rhapsody* | *-*-darwin1.[012])
# On Rhapsody replace the C library with the System framework
compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's/ -lc / System.ltframework /'`
finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's/ -lc / System.ltframework /'`
;;
esac
case $host in
*-*-darwin*)
# Don't allow lazy linking, it breaks C++ global constructors
# But is supposedly fixed on 10.4 or later (yay!).
if test CXX = "$tagname"; then
case ${MACOSX_DEPLOYMENT_TARGET-10.0} in
10.[0123])
func_append compile_command " $wl-bind_at_load"
func_append finalize_command " $wl-bind_at_load"
;;
esac
fi
# Time to change all our "foo.ltframework" stuff back to "-framework foo"
compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'`
;;
esac
# move library search paths that coincide with paths to not yet
# installed libraries to the beginning of the library search list
new_libs=
for path in $notinst_path; do
case " $new_libs " in
*" -L$path/$objdir "*) ;;
*)
case " $compile_deplibs " in
*" -L$path/$objdir "*)
func_append new_libs " -L$path/$objdir" ;;
esac
;;
esac
done
for deplib in $compile_deplibs; do
case $deplib in
-L*)
case " $new_libs " in
*" $deplib "*) ;;
*) func_append new_libs " $deplib" ;;
esac
;;
*) func_append new_libs " $deplib" ;;
esac
done
compile_deplibs=$new_libs
func_append compile_command " $compile_deplibs"
func_append finalize_command " $finalize_deplibs"
if test -n "$rpath$xrpath"; then
# If the user specified any rpath flags, then add them.
for libdir in $rpath $xrpath; do
# This is the magic to use -rpath.
case "$finalize_rpath " in
*" $libdir "*) ;;
*) func_append finalize_rpath " $libdir" ;;
esac
done
fi
# Now hardcode the library paths
rpath=
hardcode_libdirs=
for libdir in $compile_rpath $finalize_rpath; do
if test -n "$hardcode_libdir_flag_spec"; then
if test -n "$hardcode_libdir_separator"; then
if test -z "$hardcode_libdirs"; then
hardcode_libdirs=$libdir
else
# Just accumulate the unique libdirs.
case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
*"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
;;
*)
func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
;;
esac
fi
else
eval flag=\"$hardcode_libdir_flag_spec\"
func_append rpath " $flag"
fi
elif test -n "$runpath_var"; then
case "$perm_rpath " in
*" $libdir "*) ;;
*) func_append perm_rpath " $libdir" ;;
esac
fi
case $host in
*-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*)
testbindir=`$ECHO "$libdir" | $SED -e 's*/lib$*/bin*'`
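# (The sed expression uses '*' as its delimiter: a trailing /lib in
# $libdir is rewritten to /bin, so DLLs installed beside their import
# libraries are also found on the search path.)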
case :$dllsearchpath: in
*":$libdir:"*) ;;
::) dllsearchpath=$libdir;;
*) func_append dllsearchpath ":$libdir";;
esac
case :$dllsearchpath: in
*":$testbindir:"*) ;;
::) dllsearchpath=$testbindir;;
*) func_append dllsearchpath ":$testbindir";;
esac
;;
esac
done
# Substitute the hardcoded libdirs into the rpath.
if test -n "$hardcode_libdir_separator" &&
test -n "$hardcode_libdirs"; then
libdir=$hardcode_libdirs
eval rpath=\" $hardcode_libdir_flag_spec\"
fi
compile_rpath=$rpath
rpath=
hardcode_libdirs=
for libdir in $finalize_rpath; do
if test -n "$hardcode_libdir_flag_spec"; then
if test -n "$hardcode_libdir_separator"; then
if test -z "$hardcode_libdirs"; then
hardcode_libdirs=$libdir
else
# Just accumulate the unique libdirs.
case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in
*"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
;;
*)
func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
;;
esac
fi
else
eval flag=\"$hardcode_libdir_flag_spec\"
func_append rpath " $flag"
fi
elif test -n "$runpath_var"; then
case "$finalize_perm_rpath " in
*" $libdir "*) ;;
*) func_append finalize_perm_rpath " $libdir" ;;
esac
fi
done
# Substitute the hardcoded libdirs into the rpath.
if test -n "$hardcode_libdir_separator" &&
test -n "$hardcode_libdirs"; then
libdir=$hardcode_libdirs
eval rpath=\" $hardcode_libdir_flag_spec\"
fi
finalize_rpath=$rpath
if test -n "$libobjs" && test yes = "$build_old_libs"; then
# Transform all the library objects into standard objects.
compile_command=`$ECHO "$compile_command" | $SP2NL | $SED "$lo2o" | $NL2SP`
finalize_command=`$ECHO "$finalize_command" | $SP2NL | $SED "$lo2o" | $NL2SP`
fi
func_generate_dlsyms "$outputname" "@PROGRAM@" false
# template prelinking step
if test -n "$prelink_cmds"; then
func_execute_cmds "$prelink_cmds" 'exit $?'
fi
wrappers_required=:
case $host in
*cegcc* | *mingw32ce*)
# Disable wrappers for cegcc and mingw32ce hosts, we are cross compiling anyway.
wrappers_required=false
;;
*cygwin* | *mingw* )
test yes = "$build_libtool_libs" || wrappers_required=false
;;
*)
if test no = "$need_relink" || test yes != "$build_libtool_libs"; then
wrappers_required=false
fi
;;
esac
$wrappers_required || {
# Replace the output file specification.
compile_command=`$ECHO "$compile_command" | $SED 's%@OUTPUT@%'"$output"'%g'`
link_command=$compile_command$compile_rpath
# We have no uninstalled library dependencies, so finalize right now.
exit_status=0
func_show_eval "$link_command" 'exit_status=$?'
if test -n "$postlink_cmds"; then
func_to_tool_file "$output"
postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
func_execute_cmds "$postlink_cmds" 'exit $?'
fi
# Delete the generated files.
if test -f "$output_objdir/${outputname}S.$objext"; then
func_show_eval '$RM "$output_objdir/${outputname}S.$objext"'
fi
exit $exit_status
}
if test -n "$compile_shlibpath$finalize_shlibpath"; then
compile_command="$shlibpath_var=\"$compile_shlibpath$finalize_shlibpath\$$shlibpath_var\" $compile_command"
fi
if test -n "$finalize_shlibpath"; then
finalize_command="$shlibpath_var=\"$finalize_shlibpath\$$shlibpath_var\" $finalize_command"
fi
compile_var=
finalize_var=
if test -n "$runpath_var"; then
if test -n "$perm_rpath"; then
# We should set the runpath_var.
rpath=
for dir in $perm_rpath; do
func_append rpath "$dir:"
done
compile_var="$runpath_var=\"$rpath\$$runpath_var\" "
fi
if test -n "$finalize_perm_rpath"; then
# We should set the runpath_var.
rpath=
for dir in $finalize_perm_rpath; do
func_append rpath "$dir:"
done
finalize_var="$runpath_var=\"$rpath\$$runpath_var\" "
fi
fi
if test yes = "$no_install"; then
# We don't need to create a wrapper script.
link_command=$compile_var$compile_command$compile_rpath
# Replace the output file specification.
link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output"'%g'`
# Delete the old output file.
$opt_dry_run || $RM $output
# Link the executable and exit
func_show_eval "$link_command" 'exit $?'
if test -n "$postlink_cmds"; then
func_to_tool_file "$output"
postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
func_execute_cmds "$postlink_cmds" 'exit $?'
fi
exit $EXIT_SUCCESS
fi
case $hardcode_action,$fast_install in
relink,*)
# Fast installation is not supported
link_command=$compile_var$compile_command$compile_rpath
relink_command=$finalize_var$finalize_command$finalize_rpath
func_warning "this platform does not like uninstalled shared libraries"
func_warning "'$output' will be relinked during installation"
;;
*,yes)
link_command=$finalize_var$compile_command$finalize_rpath
relink_command=`$ECHO "$compile_var$compile_command$compile_rpath" | $SED 's%@OUTPUT@%\$progdir/\$file%g'`
;;
*,no)
link_command=$compile_var$compile_command$compile_rpath
relink_command=$finalize_var$finalize_command$finalize_rpath
;;
*,needless)
link_command=$finalize_var$compile_command$finalize_rpath
relink_command=
;;
esac
# Replace the output file specification.
link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output_objdir/$outputname"'%g'`
# Delete the old output files.
$opt_dry_run || $RM $output $output_objdir/$outputname $output_objdir/lt-$outputname
func_show_eval "$link_command" 'exit $?'
if test -n "$postlink_cmds"; then
func_to_tool_file "$output_objdir/$outputname"
postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
func_execute_cmds "$postlink_cmds" 'exit $?'
fi
# Now create the wrapper script.
func_verbose "creating $output"
# Quote the relink command for shipping.
if test -n "$relink_command"; then
# Preserve any variables that may affect compiler behavior
for var in $variables_saved_for_relink; do
if eval test -z \"\${$var+set}\"; then
relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command"
elif eval var_value=\$$var; test -z "$var_value"; then
relink_command="$var=; export $var; $relink_command"
else
func_quote_for_eval "$var_value"
relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command"
fi
done
relink_command="(cd `pwd`; $relink_command)"
relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"`
fi
# Only actually do things if not in dry run mode.
$opt_dry_run || {
# win32 will think the script is a binary if it has
# a .exe suffix, so we strip it off here.
case $output in
*.exe) func_stripname '' '.exe' "$output"
output=$func_stripname_result ;;
esac
# test for cygwin because mv fails w/o .exe extensions
case $host in
*cygwin*)
exeext=.exe
func_stripname '' '.exe' "$outputname"
outputname=$func_stripname_result ;;
*) exeext= ;;
esac
case $host in
*cygwin* | *mingw* )
func_dirname_and_basename "$output" "" "."
output_name=$func_basename_result
output_path=$func_dirname_result
cwrappersource=$output_path/$objdir/lt-$output_name.c
cwrapper=$output_path/$output_name.exe
$RM $cwrappersource $cwrapper
trap "$RM $cwrappersource $cwrapper; exit $EXIT_FAILURE" 1 2 15
func_emit_cwrapperexe_src > $cwrappersource
# The wrapper executable is built using the $host compiler,
# because it contains $host paths and files. If cross-
# compiling, it, like the target executable, must be
# executed on the $host or under an emulation environment.
$opt_dry_run || {
$LTCC $LTCFLAGS -o $cwrapper $cwrappersource
$STRIP $cwrapper
}
# Now, create the wrapper script for func_source use:
func_ltwrapper_scriptname $cwrapper
$RM $func_ltwrapper_scriptname_result
trap "$RM $func_ltwrapper_scriptname_result; exit $EXIT_FAILURE" 1 2 15
$opt_dry_run || {
# note: this script will not be executed, so do not chmod.
if test "x$build" = "x$host"; then
$cwrapper --lt-dump-script > $func_ltwrapper_scriptname_result
else
func_emit_wrapper no > $func_ltwrapper_scriptname_result
fi
}
;;
* )
$RM $output
trap "$RM $output; exit $EXIT_FAILURE" 1 2 15
func_emit_wrapper no > $output
chmod +x $output
;;
esac
}
exit $EXIT_SUCCESS
;;
esac
# See if we need to build an old-fashioned archive.
for oldlib in $oldlibs; do
case $build_libtool_libs in
convenience)
oldobjs="$libobjs_save $symfileobj"
addlibs=$convenience
build_libtool_libs=no
;;
module)
oldobjs=$libobjs_save
addlibs=$old_convenience
build_libtool_libs=no
;;
*)
oldobjs="$old_deplibs $non_pic_objects"
$preload && test -f "$symfileobj" \
&& func_append oldobjs " $symfileobj"
addlibs=$old_convenience
;;
esac
if test -n "$addlibs"; then
gentop=$output_objdir/${outputname}x
func_append generated " $gentop"
func_extract_archives $gentop $addlibs
func_append oldobjs " $func_extract_archives_result"
fi
# Do each command in the archive commands.
if test -n "$old_archive_from_new_cmds" && test yes = "$build_libtool_libs"; then
cmds=$old_archive_from_new_cmds
else
# Add any objects from preloaded convenience libraries
if test -n "$dlprefiles"; then
gentop=$output_objdir/${outputname}x
func_append generated " $gentop"
func_extract_archives $gentop $dlprefiles
func_append oldobjs " $func_extract_archives_result"
fi
# POSIX demands no paths to be encoded in archives. We have
# to avoid creating archives with duplicate basenames if we
# might have to extract them afterwards, e.g., when creating a
# static archive out of a convenience library, or when linking
# the entirety of a libtool archive into another (currently
# not supported by libtool).
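# (Mechanism: the basenames are listed, sorted, and checked with
# 'sort -uc', which fails when the sorted list contains duplicates;
# the else branch below then copies or renames the clashing objects.)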
if (for obj in $oldobjs
do
func_basename "$obj"
$ECHO "$func_basename_result"
done | sort | sort -uc >/dev/null 2>&1); then
:
else
echo "copying selected object files to avoid basename conflicts..."
gentop=$output_objdir/${outputname}x
func_append generated " $gentop"
func_mkdir_p "$gentop"
save_oldobjs=$oldobjs
oldobjs=
counter=1
for obj in $save_oldobjs
do
func_basename "$obj"
objbase=$func_basename_result
case " $oldobjs " in
" ") oldobjs=$obj ;;
*[\ /]"$objbase "*)
while :; do
# Make sure we don't pick an alternate name that also
# overlaps.
newobj=lt$counter-$objbase
func_arith $counter + 1
counter=$func_arith_result
case " $oldobjs " in
*[\ /]"$newobj "*) ;;
*) if test ! -f "$gentop/$newobj"; then break; fi ;;
esac
done
func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj"
func_append oldobjs " $gentop/$newobj"
;;
*) func_append oldobjs " $obj" ;;
esac
done
fi
func_to_tool_file "$oldlib" func_convert_file_msys_to_w32
tool_oldlib=$func_to_tool_file_result
eval cmds=\"$old_archive_cmds\"
func_len " $cmds"
len=$func_len_result
if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
cmds=$old_archive_cmds
elif test -n "$archiver_list_spec"; then
func_verbose "using command file archive linking..."
for obj in $oldobjs
do
func_to_tool_file "$obj"
$ECHO "$func_to_tool_file_result"
done > $output_objdir/$libname.libcmd
func_to_tool_file "$output_objdir/$libname.libcmd"
oldobjs=" $archiver_list_spec$func_to_tool_file_result"
cmds=$old_archive_cmds
else
# the command line is too long to link in one step, link in parts
func_verbose "using piecewise archive linking..."
save_RANLIB=$RANLIB
RANLIB=:
objlist=
concat_cmds=
save_oldobjs=$oldobjs
oldobjs=
# Is there a better way of finding the last object in the list?
for obj in $save_oldobjs
do
last_oldobj=$obj
done
eval test_cmds=\"$old_archive_cmds\"
func_len " $test_cmds"
len0=$func_len_result
len=$len0
for obj in $save_oldobjs
do
func_len " $obj"
func_arith $len + $func_len_result
len=$func_arith_result
func_append objlist " $obj"
if test "$len" -lt "$max_cmd_len"; then
:
else
# the above command should be used before it gets too long
oldobjs=$objlist
if test "$obj" = "$last_oldobj"; then
RANLIB=$save_RANLIB
fi
test -z "$concat_cmds" || concat_cmds=$concat_cmds~
eval concat_cmds=\"\$concat_cmds$old_archive_cmds\"
objlist=
len=$len0
fi
done
RANLIB=$save_RANLIB
oldobjs=$objlist
if test -z "$oldobjs"; then
eval cmds=\"\$concat_cmds\"
else
eval cmds=\"\$concat_cmds~\$old_archive_cmds\"
fi
fi
fi
func_execute_cmds "$cmds" 'exit $?'
done
test -n "$generated" && \
func_show_eval "${RM}r$generated"
# Now create the libtool archive.
case $output in
*.la)
old_library=
test yes = "$build_old_libs" && old_library=$libname.$libext
func_verbose "creating $output"
# Preserve any variables that may affect compiler behavior
for var in $variables_saved_for_relink; do
if eval test -z \"\${$var+set}\"; then
relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command"
elif eval var_value=\$$var; test -z "$var_value"; then
relink_command="$var=; export $var; $relink_command"
else
func_quote_for_eval "$var_value"
relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command"
fi
done
# Quote the link command for shipping.
relink_command="(cd `pwd`; $SHELL \"$progpath\" $preserve_args --mode=relink $libtool_args @inst_prefix_dir@)"
relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"`
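# (Assumption: @inst_prefix_dir@ above is a placeholder that libtool's
# install mode substitutes later, when the library is relinked.)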
if test yes = "$hardcode_automatic"; then
relink_command=
fi
# Only create the output if not a dry run.
$opt_dry_run || {
for installed in no yes; do
if test yes = "$installed"; then
if test -z "$install_libdir"; then
break
fi
output=$output_objdir/${outputname}i
# Replace all uninstalled libtool libraries with the installed ones
newdependency_libs=
for deplib in $dependency_libs; do
case $deplib in
*.la)
func_basename "$deplib"
name=$func_basename_result
func_resolve_sysroot "$deplib"
eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
test -z "$libdir" && \
func_fatal_error "'$deplib' is not a valid libtool archive"
func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name"
;;
-L*)
func_stripname -L '' "$deplib"
func_replace_sysroot "$func_stripname_result"
func_append newdependency_libs " -L$func_replace_sysroot_result"
;;
-R*)
func_stripname -R '' "$deplib"
func_replace_sysroot "$func_stripname_result"
func_append newdependency_libs " -R$func_replace_sysroot_result"
;;
*) func_append newdependency_libs " $deplib" ;;
esac
done
dependency_libs=$newdependency_libs
newdlfiles=
for lib in $dlfiles; do
case $lib in
*.la)
func_basename "$lib"
name=$func_basename_result
eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
test -z "$libdir" && \
func_fatal_error "'$lib' is not a valid libtool archive"
func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name"
;;
*) func_append newdlfiles " $lib" ;;
esac
done
dlfiles=$newdlfiles
newdlprefiles=
for lib in $dlprefiles; do
case $lib in
*.la)
# Only pass preopened files to the pseudo-archive (for
# eventual linking with the app. that links it) if we
# didn't already link the preopened objects directly into
# the library:
func_basename "$lib"
name=$func_basename_result
eval libdir=`$SED -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
test -z "$libdir" && \
func_fatal_error "'$lib' is not a valid libtool archive"
func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name"
;;
esac
done
dlprefiles=$newdlprefiles
else
newdlfiles=
for lib in $dlfiles; do
case $lib in
[\\/]* | [A-Za-z]:[\\/]*) abs=$lib ;;
*) abs=`pwd`"/$lib" ;;
esac
func_append newdlfiles " $abs"
done
dlfiles=$newdlfiles
newdlprefiles=
for lib in $dlprefiles; do
case $lib in
[\\/]* | [A-Za-z]:[\\/]*) abs=$lib ;;
*) abs=`pwd`"/$lib" ;;
esac
func_append newdlprefiles " $abs"
done
dlprefiles=$newdlprefiles
fi
$RM $output
# place dlname in correct position for cygwin
# In fact, it would be nice if we could use this code for all target
# systems that can't hard-code library paths into their executables
# and that have no shared library path variable independent of PATH,
# but it turns out we can't easily determine that from inspecting
# libtool variables, so we have to hard-code the OSs to which it
# applies here; at the moment, that means platforms that use the PE
# object format with DLL files. See the long comment at the top of
# tests/bindir.at for full details.
tdlname=$dlname
case $host,$output,$installed,$module,$dlname in
*cygwin*,*lai,yes,no,*.dll | *mingw*,*lai,yes,no,*.dll | *cegcc*,*lai,yes,no,*.dll)
# If a -bindir argument was supplied, place the dll there.
if test -n "$bindir"; then
func_relative_path "$install_libdir" "$bindir"
tdlname=$func_relative_path_result/$dlname
else
# Otherwise fall back on heuristic.
tdlname=../bin/$dlname
fi
;;
esac
$ECHO > $output "\
# $outputname - a libtool library file
# Generated by $PROGRAM (GNU $PACKAGE) $VERSION
#
# Please DO NOT delete this file!
# It is necessary for linking the library.
# The name that we can dlopen(3).
dlname='$tdlname'
# Names of this library.
library_names='$library_names'
# The name of the static archive.
old_library='$old_library'
# Linker flags that cannot go in dependency_libs.
inherited_linker_flags='$new_inherited_linker_flags'
# Libraries that this one depends upon.
dependency_libs='$dependency_libs'
# Names of additional weak libraries provided by this library
weak_library_names='$weak_libs'
# Version information for $libname.
current=$current
age=$age
revision=$revision
# Is this an already installed library?
installed=$installed
# Should we warn about portability when linking against -modules?
shouldnotlink=$module
# Files to dlopen/dlpreopen
dlopen='$dlfiles'
dlpreopen='$dlprefiles'
# Directory that this library needs to be installed in:
libdir='$install_libdir'"
if test no,yes = "$installed,$need_relink"; then
$ECHO >> $output "\
relink_command=\"$relink_command\""
fi
done
}
# Do a symbolic link so that the libtool archive can be found in
# LD_LIBRARY_PATH before the program is installed.
func_show_eval '( cd "$output_objdir" && $RM "$outputname" && $LN_S "../$outputname" "$outputname" )' 'exit $?'
;;
esac
exit $EXIT_SUCCESS
}
if test link = "$opt_mode" || test relink = "$opt_mode"; then
func_mode_link ${1+"$@"}
fi
# func_mode_uninstall arg...
func_mode_uninstall ()
{
$debug_cmd
RM=$nonopt
files=
rmforce=false
exit_status=0
# This variable tells wrapper scripts just to set variables rather
# than running their programs.
libtool_install_magic=$magic
for arg
do
case $arg in
-f) func_append RM " $arg"; rmforce=: ;;
-*) func_append RM " $arg" ;;
*) func_append files " $arg" ;;
esac
done
test -z "$RM" && \
func_fatal_help "you must specify an RM program"
rmdirs=
for file in $files; do
func_dirname "$file" "" "."
dir=$func_dirname_result
if test . = "$dir"; then
odir=$objdir
else
odir=$dir/$objdir
fi
func_basename "$file"
name=$func_basename_result
test uninstall = "$opt_mode" && odir=$dir
# Remember odir for removal later, being careful to avoid duplicates
if test clean = "$opt_mode"; then
case " $rmdirs " in
*" $odir "*) ;;
*) func_append rmdirs " $odir" ;;
esac
fi
# Don't error if the file doesn't exist and rm -f was used.
if { test -L "$file"; } >/dev/null 2>&1 ||
{ test -h "$file"; } >/dev/null 2>&1 ||
test -f "$file"; then
:
elif test -d "$file"; then
exit_status=1
continue
elif $rmforce; then
continue
fi
rmfiles=$file
case $name in
*.la)
# Possibly a libtool archive, so verify it.
if func_lalib_p "$file"; then
func_source $dir/$name
# Delete the libtool libraries and symlinks.
for n in $library_names; do
func_append rmfiles " $odir/$n"
done
test -n "$old_library" && func_append rmfiles " $odir/$old_library"
case $opt_mode in
clean)
case " $library_names " in
*" $dlname "*) ;;
*) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;;
esac
test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i"
;;
uninstall)
if test -n "$library_names"; then
# Do each command in the postuninstall commands.
func_execute_cmds "$postuninstall_cmds" '$rmforce || exit_status=1'
fi
if test -n "$old_library"; then
# Do each command in the old_postuninstall commands.
func_execute_cmds "$old_postuninstall_cmds" '$rmforce || exit_status=1'
fi
# FIXME: should reinstall the best remaining shared library.
;;
esac
fi
;;
*.lo)
# Possibly a libtool object, so verify it.
if func_lalib_p "$file"; then
# Read the .lo file
func_source $dir/$name
# Add PIC object to the list of files to remove.
if test -n "$pic_object" && test none != "$pic_object"; then
func_append rmfiles " $dir/$pic_object"
fi
# Add non-PIC object to the list of files to remove.
if test -n "$non_pic_object" && test none != "$non_pic_object"; then
func_append rmfiles " $dir/$non_pic_object"
fi
fi
;;
*)
if test clean = "$opt_mode"; then
noexename=$name
case $file in
*.exe)
func_stripname '' '.exe' "$file"
file=$func_stripname_result
func_stripname '' '.exe' "$name"
noexename=$func_stripname_result
# $file with .exe has already been added to rmfiles,
# add $file without .exe
func_append rmfiles " $file"
;;
esac
# Do a test to see if this is a libtool program.
if func_ltwrapper_p "$file"; then
if func_ltwrapper_executable_p "$file"; then
func_ltwrapper_scriptname "$file"
relink_command=
func_source $func_ltwrapper_scriptname_result
func_append rmfiles " $func_ltwrapper_scriptname_result"
else
relink_command=
func_source $dir/$noexename
fi
# note $name still contains .exe if it was in $file originally
# as does the version of $file that was added into $rmfiles
func_append rmfiles " $odir/$name $odir/${name}S.$objext"
if test yes = "$fast_install" && test -n "$relink_command"; then
func_append rmfiles " $odir/lt-$name"
fi
if test "X$noexename" != "X$name"; then
func_append rmfiles " $odir/lt-$noexename.c"
fi
fi
fi
;;
esac
func_show_eval "$RM $rmfiles" 'exit_status=1'
done
# Try to remove the $objdir's in the directories where we deleted files
for dir in $rmdirs; do
if test -d "$dir"; then
func_show_eval "rmdir $dir >/dev/null 2>&1"
fi
done
exit $exit_status
}
if test uninstall = "$opt_mode" || test clean = "$opt_mode"; then
func_mode_uninstall ${1+"$@"}
fi
test -z "$opt_mode" && {
help=$generic_help
func_fatal_help "you must specify a MODE"
}
test -z "$exec_cmd" && \
func_fatal_help "invalid operation mode '$opt_mode'"
if test -n "$exec_cmd"; then
eval exec "$exec_cmd"
exit $EXIT_FAILURE
fi
exit $exit_status
# The TAGs below are defined such that we never get into a situation
# where we disable both kinds of libraries. Given conflicting
# choices, we go for a static library, that is the most portable,
# since we can't tell whether shared libraries were disabled because
# the user asked for that or because the platform doesn't support
# them. This is particularly important on AIX, because we don't
# support having both static and shared libraries enabled at the same
# time on that platform, so we default to a shared-only configuration.
# If a disable-shared tag is given, we'll fallback to a static-only
# configuration. But we'll never go from static-only to shared-only.
# ### BEGIN LIBTOOL TAG CONFIG: disable-shared
build_libtool_libs=no
build_old_libs=yes
# ### END LIBTOOL TAG CONFIG: disable-shared
# ### BEGIN LIBTOOL TAG CONFIG: disable-static
build_old_libs=`case $build_libtool_libs in yes) echo no;; *) echo yes;; esac`
# ### END LIBTOOL TAG CONFIG: disable-static
# Local Variables:
# mode:shell-script
# sh-indentation:2
# End:
liblognorm-2.0.6/depcomp 0000755 0001750 0001750 00000056017 13370251155 012174 0000000 0000000 #! /bin/sh
# depcomp - compile a program generating dependencies as side-effects
scriptversion=2016-01-11.22; # UTC
# Copyright (C) 1999-2017 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# Originally written by Alexandre Oliva .
case $1 in
'')
echo "$0: No command. Try '$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: depcomp [--help] [--version] PROGRAM [ARGS]
Run PROGRAM ARGS to compile a file, generating dependencies
as side-effects.
Environment variables:
depmode Dependency tracking mode.
source Source file read by 'PROGRAM ARGS'.
object Object file output by 'PROGRAM ARGS'.
DEPDIR directory where to store dependencies.
depfile Dependency file to output.
tmpdepfile Temporary file to use when outputting dependencies.
libtool Whether libtool is used (yes/no).
Report bugs to <bug-automake@gnu.org>.
EOF
exit $?
;;
-v | --v*)
echo "depcomp $scriptversion"
exit $?
;;
esac
# Get the directory component of the given path, and save it in the
# global variables '$dir'. Note that this directory component will
# be either empty or ending with a '/' character. This is deliberate.
set_dir_from ()
{
case $1 in
*/*) dir=`echo "$1" | sed -e 's|/[^/]*$|/|'`;;
*) dir=;;
esac
}
# Get the suffix-stripped basename of the given path, and save it in the
# global variable '$base'.
set_base_from ()
{
base=`echo "$1" | sed -e 's|^.*/||' -e 's/\.[^.]*$//'`
}
# If no dependency file was actually created by the compiler invocation,
# we still have to create a dummy depfile, to avoid errors with the
# Makefile "include basename.Plo" scheme.
make_dummy_depfile ()
{
echo "#dummy" > "$depfile"
}
# Factor out some common post-processing of the generated depfile.
# Requires the auxiliary global variable '$tmpdepfile' to be set.
aix_post_process_depfile ()
{
# If the compiler actually managed to produce a dependency file,
# post-process it.
if test -f "$tmpdepfile"; then
# Each line is of the form 'foo.o: dependency.h'.
# Do two passes, one to just change these to
# $object: dependency.h
# and one to simply output
# dependency.h:
# which is needed to avoid the deleted-header problem.
{ sed -e "s,^.*\.[$lower]*:,$object:," < "$tmpdepfile"
sed -e "s,^.*\.[$lower]*:[$tab ]*,," -e 's,$,:,' < "$tmpdepfile"
} > "$depfile"
rm -f "$tmpdepfile"
else
make_dummy_depfile
fi
}
# A tabulation character.
tab=' '
# A newline character.
nl='
'
# Character ranges might be problematic outside the C locale.
# These definitions help.
upper=ABCDEFGHIJKLMNOPQRSTUVWXYZ
lower=abcdefghijklmnopqrstuvwxyz
digits=0123456789
alpha=${upper}${lower}
if test -z "$depmode" || test -z "$source" || test -z "$object"; then
echo "depcomp: Variables source, object and depmode must be set" 1>&2
exit 1
fi
# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
depfile=${depfile-`echo "$object" |
sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
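# (Illustration: for object 'sub/bar.o' the defaults above give
# depfile='sub/.deps/bar.Po' and tmpdepfile='sub/.deps/bar.TPo'.)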
rm -f "$tmpdepfile"
# Avoid interferences from the environment.
gccflag= dashmflag=
# Some modes work just like other modes, but use different flags. We
# parameterize here, but still list the modes in the big case below,
# to make depend.m4 easier to write. Note that we *cannot* use a case
# here, because this file can only contain one case statement.
if test "$depmode" = hp; then
# HP compiler uses -M and no extra arg.
gccflag=-M
depmode=gcc
fi
if test "$depmode" = dashXmstdout; then
# This is just like dashmstdout with a different argument.
dashmflag=-xM
depmode=dashmstdout
fi
cygpath_u="cygpath -u -f -"
if test "$depmode" = msvcmsys; then
# This is just like msvisualcpp but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvisualcpp
fi
if test "$depmode" = msvc7msys; then
# This is just like msvc7 but w/o cygpath translation.
# Just convert the backslash-escaped backslashes to single forward
# slashes to satisfy depend.m4
cygpath_u='sed s,\\\\,/,g'
depmode=msvc7
fi
if test "$depmode" = xlc; then
# IBM C/C++ Compilers xlc/xlC can output gcc-like dependency information.
gccflag=-qmakedep=gcc,-MF
depmode=gcc
fi
case "$depmode" in
gcc3)
## gcc 3 implements dependency tracking that does exactly what
## we want. Yay! Note: for some reason libtool 1.4 doesn't like
## it if -MD -MP comes after the -MF stuff. Hmm.
## Unfortunately, FreeBSD c89 acceptance of flags depends upon
## the command line argument order; so add the flags where they
## appear in depend2.am. Note that the slowdown incurred here
## affects only configure: in makefiles, %FASTDEP% shortcuts this.
for arg
do
case $arg in
-c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;;
*) set fnord "$@" "$arg" ;;
esac
shift # fnord
shift # $arg
done
"$@"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
mv "$tmpdepfile" "$depfile"
;;
gcc)
## Note that this doesn't just cater to obsolete pre-3.x GCC compilers,
## but also to in-use compilers like IBM xlc/xlC and the HP C compiler.
## (see the conditional assignment to $gccflag above).
## There are various ways to get dependency output from gcc. Here's
## why we pick this rather obscure method:
## - Don't want to use -MD because we'd like the dependencies to end
## up in a subdir. Having to rename by hand is ugly.
## (We might end up doing this anyway to support other compilers.)
## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
## -MM, not -M (despite what the docs say). Also, it might not be
## supported by the other compilers which use the 'gcc' depmode.
## - Using -M directly means running the compiler twice (even worse
## than renaming).
if test -z "$gccflag"; then
gccflag=-MD,
fi
"$@" -Wp,"$gccflag$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The second -e expression handles DOS-style file names with drive
# letters.
sed -e 's/^[^:]*: / /' \
-e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
## This next piece of magic avoids the "deleted header file" problem.
## The problem is that when a header file which appears in a .P file
## is deleted, the dependency causes make to die (because there is
## typically no way to rebuild the header). We avoid this by adding
## dummy dependencies for each header file. Too bad gcc doesn't do
## this for us directly.
## Some versions of gcc put a space before the ':'. On the theory
## that the space means something, we add a space to the output as
## well. hp depmode also adds that space, but also prefixes the VPATH
## to the object. Take care to not repeat it in the output.
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
sgi)
if test "$libtool" = yes; then
"$@" "-Wp,-MDupdate,$tmpdepfile"
else
"$@" -MDupdate "$tmpdepfile"
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files
echo "$object : \\" > "$depfile"
# Clip off the initial element (the dependent). Don't try to be
# clever and replace this with sed code, as IRIX sed won't handle
# lines with more than a fixed number of characters (4096 in
# IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
# the IRIX cc adds comments like '#:fec' to the end of the
# dependency line.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' \
| tr "$nl" ' ' >> "$depfile"
echo >> "$depfile"
# The second pass generates a dummy entry for each header file.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
>> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile"
;;
xlc)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
aix)
# The C for AIX Compiler uses -M and outputs the dependencies
# in a .u file. In older versions, this file always lives in the
# current directory. Also, the AIX compiler puts '$object:' at the
# start of each line; $object doesn't have directory information.
# Version 6 uses the directory in both cases.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.u
tmpdepfile2=$base.u
tmpdepfile3=$dir.libs/$base.u
"$@" -Wc,-M
else
tmpdepfile1=$dir$base.u
tmpdepfile2=$dir$base.u
tmpdepfile3=$dir$base.u
"$@" -M
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
aix_post_process_depfile
;;
tcc)
# tcc (Tiny C Compiler) understands '-MD -MF file' since version 0.9.26
# FIXME: That version was still under development at the moment of writing.
# Make sure this statement remains true also for stable, released
# versions.
# It will wrap lines (doesn't matter whether long or short) with a
# trailing '\', as in:
#
# foo.o : \
# foo.c \
# foo.h \
#
# It will put a trailing '\' even on the last line, and will use leading
# spaces rather than leading tabs (at least since its commit 0394caf7
# "Emit spaces for -MD").
"$@" -MD -MF "$tmpdepfile"
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each non-empty line is of the form 'foo.o : \' or ' dep.h \'.
# We have to change lines of the first kind to '$object: \'.
sed -e "s|.*:|$object :|" < "$tmpdepfile" > "$depfile"
# And for each line of the second kind, we have to emit a 'dep.h:'
# dummy dependency, to avoid the deleted-header problem.
sed -n -e 's|^ *\(.*\) *\\$|\1:|p' < "$tmpdepfile" >> "$depfile"
rm -f "$tmpdepfile"
;;
## The order of this option in the case statement is important, since the
## shell code in configure will try each of these formats in the order
## listed in this file. A plain '-MD' option would be understood by many
## compilers, so we must ensure this comes after the gcc and icc options.
pgcc)
# Portland's C compiler understands '-MD'.
# Will always output deps to 'file.d' where file is the root name of the
# source file under compilation, even if file resides in a subdirectory.
# The object file name does not affect the name of the '.d' file.
# pgcc 10.2 will output
# foo.o: sub/foo.c sub/foo.h
# and will wrap long lines using '\' :
# foo.o: sub/foo.c ... \
# sub/foo.h ... \
# ...
set_dir_from "$object"
# Use the source, not the object, to determine the base name, since
# that's sadly what pgcc will do too.
set_base_from "$source"
tmpdepfile=$base.d
# For projects that build the same source file twice into different object
# files, the pgcc approach of using the *source* file root name can cause
# problems in parallel builds. Use a locking strategy to avoid stomping on
# the same $tmpdepfile.
lockdir=$base.d-lock
trap "
echo '$0: caught signal, cleaning up...' >&2
rmdir '$lockdir'
exit 1
" 1 2 13 15
numtries=100
i=$numtries
while test $i -gt 0; do
# mkdir is a portable test-and-set.
if mkdir "$lockdir" 2>/dev/null; then
# This process acquired the lock.
"$@" -MD
stat=$?
# Release the lock.
rmdir "$lockdir"
break
else
# If the lock is being held by a different process, wait
# until the winning process is done or we timeout.
while test -d "$lockdir" && test $i -gt 0; do
sleep 1
i=`expr $i - 1`
done
fi
i=`expr $i - 1`
done
trap - 1 2 13 15
if test $i -le 0; then
echo "$0: failed to acquire lock after $numtries attempts" >&2
echo "$0: check lockdir '$lockdir'" >&2
exit 1
fi
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each line is of the form `foo.o: dependent.h',
# or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp2)
# The "hp" stanza above does not work with aCC (C++) and HP's ia64
# compilers, which have integrated preprocessors. The correct option
# to use with these is +Maked; it writes dependencies to a file named
# 'foo.d', which lands next to the object file, wherever that
# happens to be.
# Much of this is similar to the tru64 case; see comments there.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir.libs/$base.d
"$@" -Wc,+Maked
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
"$@" +Maked
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2"
do
test -f "$tmpdepfile" && break
done
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[$lower]*:,$object:," "$tmpdepfile" > "$depfile"
# Add 'dependent.h:' lines.
sed -ne '2,${
s/^ *//
s/ \\*$//
s/$/:/
p
}' "$tmpdepfile" >> "$depfile"
else
make_dummy_depfile
fi
rm -f "$tmpdepfile" "$tmpdepfile2"
;;
tru64)
# The Tru64 compiler uses -MD to generate dependencies as a side
# effect. 'cc -MD -o foo.o ...' puts the dependencies into 'foo.o.d'.
# At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
# dependencies in 'foo.d' instead, so we check for that too.
# Subdirectories are respected.
set_dir_from "$object"
set_base_from "$object"
if test "$libtool" = yes; then
# Libtool generates 2 separate objects for the 2 libraries. These
# two compilations output dependencies in $dir.libs/$base.o.d and
# in $dir$base.o.d. We have to check for both files, because
# one of the two compilations can be disabled. We should prefer
# $dir$base.o.d over $dir.libs/$base.o.d because the latter is
# automatically cleaned when .libs/ is deleted, while ignoring
# the former would cause a distcleancheck panic.
tmpdepfile1=$dir$base.o.d # libtool 1.5
tmpdepfile2=$dir.libs/$base.o.d # Likewise.
tmpdepfile3=$dir.libs/$base.d # Compaq CCC V6.2-504
"$@" -Wc,-MD
else
tmpdepfile1=$dir$base.d
tmpdepfile2=$dir$base.d
tmpdepfile3=$dir$base.d
"$@" -MD
fi
stat=$?
if test $stat -ne 0; then
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
do
test -f "$tmpdepfile" && break
done
# Same post-processing that is required for AIX mode.
aix_post_process_depfile
;;
msvc7)
if test "$libtool" = yes; then
showIncludes=-Wc,-showIncludes
else
showIncludes=-showIncludes
fi
"$@" $showIncludes > "$tmpdepfile"
stat=$?
grep -v '^Note: including file: ' "$tmpdepfile"
if test $stat -ne 0; then
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
# The first sed program below extracts the file names and escapes
# backslashes for cygpath. The second sed program outputs the file
# name when reading, but also accumulates all include files in the
# hold buffer in order to output them again at the end. This only
# works with sed implementations that can handle large buffers.
sed < "$tmpdepfile" -n '
/^Note: including file: *\(.*\)/ {
s//\1/
s/\\/\\\\/g
p
}' | $cygpath_u | sort -u | sed -n '
s/ /\\ /g
s/\(.*\)/'"$tab"'\1 \\/p
s/.\(.*\) \\/\1:/
H
$ {
s/.*/'"$tab"'/
G
p
}' >> "$depfile"
echo >> "$depfile" # make sure the fragment doesn't end with a backslash
rm -f "$tmpdepfile"
;;
msvc7msys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
#nosideeffect)
# This comment above is used by automake to tell side-effect
# dependency tracking mechanisms from slower ones.
dashmstdout)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
test -z "$dashmflag" && dashmflag=-M
# Require at least two characters before searching for ':'
# in the target name. This is to cope with DOS-style filenames:
# a dependency such as 'c:/foo/bar' could be seen as target 'c' otherwise.
"$@" $dashmflag |
sed "s|^[$tab ]*[^:$tab ][^:][^:]*:[$tab ]*|$object: |" > "$tmpdepfile"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this sed invocation
# correctly. Breaking it into two sed invocations is a workaround.
tr ' ' "$nl" < "$tmpdepfile" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
dashXmstdout)
# This case only exists to satisfy depend.m4. It is never actually
# run, as this mode is specially recognized in the preamble.
exit 1
;;
makedepend)
"$@" || exit $?
# Remove any Libtool call
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# X makedepend
shift
cleared=no eat=no
for arg
do
case $cleared in
no)
set ""; shift
cleared=yes ;;
esac
if test $eat = yes; then
eat=no
continue
fi
case "$arg" in
-D*|-I*)
set fnord "$@" "$arg"; shift ;;
# Strip any option that makedepend may not understand. Remove
# the object too, otherwise makedepend will parse it as a source file.
-arch)
eat=yes ;;
-*|$object)
;;
*)
set fnord "$@" "$arg"; shift ;;
esac
done
obj_suffix=`echo "$object" | sed 's/^.*\././'`
touch "$tmpdepfile"
${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
rm -f "$depfile"
# makedepend may prepend the VPATH from the source file name to the object.
# No need to regex-escape $object, excess matching of '.' is harmless.
sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process the last invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed '1,2d' "$tmpdepfile" \
| tr ' ' "$nl" \
| sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' \
| sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile" "$tmpdepfile".bak
;;
cpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
# Remove '-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
"$@" -E \
| sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
-e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \
| sed '$ s: \\$::' > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
cat < "$tmpdepfile" >> "$depfile"
sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvisualcpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test "X$1" != 'X--mode=compile'; do
shift
done
shift
fi
IFS=" "
for arg
do
case "$arg" in
-o)
shift
;;
$object)
shift
;;
"-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
set fnord "$@"
shift
shift
;;
*)
set fnord "$@" "$arg"
shift
shift
;;
esac
done
"$@" -E 2>/dev/null |
sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::'"$tab"'\1 \\:p' >> "$depfile"
echo "$tab" >> "$depfile"
sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvcmsys)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
none)
exec "$@"
;;
*)
echo "Unknown depmode $depmode" 1>&2
exit 1
;;
esac
exit 0
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-time-zone: "UTC0"
# time-stamp-end: "; # UTC"
# End:
liblognorm-2.0.6/AUTHORS 0000644 0001750 0001750 00000000066 13273030617 011660 0000000 0000000 Rainer Gerhards , Adiscon GmbH
liblognorm-2.0.6/config.sub 0000755 0001750 0001750 00000106450 13370251154 012576 0000000 0000000 #! /bin/sh
# Configuration validation subroutine script.
# Copyright 1992-2018 Free Software Foundation, Inc.
timestamp='2018-02-22'
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, see <https://www.gnu.org/licenses/>.
#
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that
# program. This Exception is an additional permission under section 7
# of the GNU General Public License, version 3 ("GPLv3").
# Please send patches to <config-patches@gnu.org>.
#
# Configuration subroutine to validate and canonicalize a configuration type.
# Supply the specified configuration type as an argument.
# If it is invalid, we print an error message on stderr and exit with code 1.
# Otherwise, we print the canonical config type on stdout and succeed.
# You can get the latest version of this script from:
# https://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub
# This file is supposed to be the same for all GNU packages
# and recognize all the CPU types, system types and aliases
# that are meaningful with *any* GNU software.
# Each package is responsible for reporting which valid configurations
# it does not support. The user should be able to distinguish
# a failure to support a valid configuration from a meaningless
# configuration.
# The goal of this file is to map all the various variations of a given
# machine specification into a single specification in the form:
# CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM
# or in some cases, the newer four-part form:
# CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM
# It is wrong to echo any other type of specification.
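# For example (illustrative only; the exact canonical names depend on the
# tables below): running
#   ./config.sub x86_64-linux
# is expected to print 'x86_64-pc-linux-gnu', and
#   ./config.sub sun4
# is expected to print 'sparc-sun-sunos4.1.1'.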
me=`echo "$0" | sed -e 's,.*/,,'`
usage="\
Usage: $0 [OPTION] CPU-MFR-OPSYS or ALIAS
Canonicalize a configuration name.
Options:
-h, --help print this help, then exit
-t, --time-stamp print date of last modification, then exit
-v, --version print version number, then exit
Report bugs and patches to <config-patches@gnu.org>.
version="\
GNU config.sub ($timestamp)
Copyright 1992-2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE."
help="
Try \`$me --help' for more information."
# Parse command line
while test $# -gt 0 ; do
case $1 in
--time-stamp | --time* | -t )
echo "$timestamp" ; exit ;;
--version | -v )
echo "$version" ; exit ;;
--help | --h* | -h )
echo "$usage"; exit ;;
-- ) # Stop option processing
shift; break ;;
- ) # Use stdin as input.
break ;;
-* )
echo "$me: invalid option $1$help"
exit 1 ;;
*local*)
# First pass through any local machine types.
echo "$1"
exit ;;
* )
break ;;
esac
done
case $# in
0) echo "$me: missing argument$help" >&2
exit 1;;
1) ;;
*) echo "$me: too many arguments$help" >&2
exit 1;;
esac
# Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any).
# Here we must recognize all the valid KERNEL-OS combinations.
maybe_os=`echo "$1" | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'`
case $maybe_os in
nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \
linux-musl* | linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \
knetbsd*-gnu* | netbsd*-gnu* | netbsd*-eabi* | \
kopensolaris*-gnu* | cloudabi*-eabi* | \
storm-chaos* | os2-emx* | rtmk-nova*)
os=-$maybe_os
basic_machine=`echo "$1" | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`
;;
android-linux)
os=-linux-android
basic_machine=`echo "$1" | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`-unknown
;;
*)
basic_machine=`echo "$1" | sed 's/-[^-]*$//'`
if [ "$basic_machine" != "$1" ]
then os=`echo "$1" | sed 's/.*-/-/'`
else os=; fi
;;
esac
### Let's recognize common machines as not being operating systems so
### that things like config.sub decstation-3100 work. We also
### recognize some manufacturers as not being operating systems, so we
### can provide default operating systems below.
case $os in
-sun*os*)
# Prevent following clause from handling this invalid input.
;;
-dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \
-att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \
-unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \
-convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\
-c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \
-harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \
-apple | -axis | -knuth | -cray | -microblaze*)
os=
basic_machine=$1
;;
-bluegene*)
os=-cnk
;;
-sim | -cisco | -oki | -wec | -winbond)
os=
basic_machine=$1
;;
-scout)
;;
-wrs)
os=-vxworks
basic_machine=$1
;;
-chorusos*)
os=-chorusos
basic_machine=$1
;;
-chorusrdb)
os=-chorusrdb
basic_machine=$1
;;
-hiux*)
os=-hiuxwe2
;;
-sco6)
os=-sco5v6
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-sco5)
os=-sco3.2v5
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-sco4)
os=-sco3.2v4
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-sco3.2.[4-9]*)
os=`echo $os | sed -e 's/sco3.2./sco3.2v/'`
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-sco3.2v[4-9]*)
# Don't forget version if it is 3.2v4 or newer.
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-sco5v6*)
# Don't forget version if it is 3.2v4 or newer.
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-sco*)
os=-sco3.2v2
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-udk*)
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-isc)
os=-isc2.2
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-clix*)
basic_machine=clipper-intergraph
;;
-isc*)
basic_machine=`echo "$1" | sed -e 's/86-.*/86-pc/'`
;;
-lynx*178)
os=-lynxos178
;;
-lynx*5)
os=-lynxos5
;;
-lynx*)
os=-lynxos
;;
-ptx*)
basic_machine=`echo "$1" | sed -e 's/86-.*/86-sequent/'`
;;
-psos*)
os=-psos
;;
-mint | -mint[0-9]*)
basic_machine=m68k-atari
os=-mint
;;
esac
# Decode aliases for certain CPU-COMPANY combinations.
case $basic_machine in
# Recognize the basic CPU types without company name.
# Some are omitted here because they have special meanings below.
1750a | 580 \
| a29k \
| aarch64 | aarch64_be \
| alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \
| alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \
| am33_2.0 \
| arc | arceb \
| arm | arm[bl]e | arme[lb] | armv[2-8] | armv[3-8][lb] | armv7[arm] \
| avr | avr32 \
| ba \
| be32 | be64 \
| bfin \
| c4x | c8051 | clipper \
| d10v | d30v | dlx | dsp16xx \
| e2k | epiphany \
| fido | fr30 | frv | ft32 \
| h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \
| hexagon \
| i370 | i860 | i960 | ia16 | ia64 \
| ip2k | iq2000 \
| k1om \
| le32 | le64 \
| lm32 \
| m32c | m32r | m32rle | m68000 | m68k | m88k \
| maxq | mb | microblaze | microblazeel | mcore | mep | metag \
| mips | mipsbe | mipseb | mipsel | mipsle \
| mips16 \
| mips64 | mips64el \
| mips64octeon | mips64octeonel \
| mips64orion | mips64orionel \
| mips64r5900 | mips64r5900el \
| mips64vr | mips64vrel \
| mips64vr4100 | mips64vr4100el \
| mips64vr4300 | mips64vr4300el \
| mips64vr5000 | mips64vr5000el \
| mips64vr5900 | mips64vr5900el \
| mipsisa32 | mipsisa32el \
| mipsisa32r2 | mipsisa32r2el \
| mipsisa32r6 | mipsisa32r6el \
| mipsisa64 | mipsisa64el \
| mipsisa64r2 | mipsisa64r2el \
| mipsisa64r6 | mipsisa64r6el \
| mipsisa64sb1 | mipsisa64sb1el \
| mipsisa64sr71k | mipsisa64sr71kel \
| mipsr5900 | mipsr5900el \
| mipstx39 | mipstx39el \
| mn10200 | mn10300 \
| moxie \
| mt \
| msp430 \
| nds32 | nds32le | nds32be \
| nios | nios2 | nios2eb | nios2el \
| ns16k | ns32k \
| open8 | or1k | or1knd | or32 \
| pdp10 | pj | pjl \
| powerpc | powerpc64 | powerpc64le | powerpcle \
| pru \
| pyramid \
| riscv32 | riscv64 \
| rl78 | rx \
| score \
| sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[234]eb | sheb | shbe | shle | sh[1234]le | sh3ele \
| sh64 | sh64le \
| sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \
| sparcv8 | sparcv9 | sparcv9b | sparcv9v \
| spu \
| tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \
| ubicom32 \
| v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \
| visium \
| wasm32 \
| x86 | xc16x | xstormy16 | xtensa \
| z8k | z80)
basic_machine=$basic_machine-unknown
;;
c54x)
basic_machine=tic54x-unknown
;;
c55x)
basic_machine=tic55x-unknown
;;
c6x)
basic_machine=tic6x-unknown
;;
leon|leon[3-9])
basic_machine=sparc-$basic_machine
;;
m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | nvptx | picochip)
basic_machine=$basic_machine-unknown
os=-none
;;
m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65)
;;
ms1)
basic_machine=mt-unknown
;;
strongarm | thumb | xscale)
basic_machine=arm-unknown
;;
xgate)
basic_machine=$basic_machine-unknown
os=-none
;;
xscaleeb)
basic_machine=armeb-unknown
;;
xscaleel)
basic_machine=armel-unknown
;;
# We use `pc' rather than `unknown'
# because (1) that's what they normally are, and
# (2) the word "unknown" tends to confuse beginning users.
i*86 | x86_64)
basic_machine=$basic_machine-pc
;;
# Object if more than one company name word.
*-*-*)
echo Invalid configuration \`"$1"\': machine \`"$basic_machine"\' not recognized 1>&2
exit 1
;;
# Recognize the basic CPU types with company name.
580-* \
| a29k-* \
| aarch64-* | aarch64_be-* \
| alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \
| alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \
| alphapca5[67]-* | alpha64pca5[67]-* | arc-* | arceb-* \
| arm-* | armbe-* | armle-* | armeb-* | armv*-* \
| avr-* | avr32-* \
| ba-* \
| be32-* | be64-* \
| bfin-* | bs2000-* \
| c[123]* | c30-* | [cjt]90-* | c4x-* \
| c8051-* | clipper-* | craynv-* | cydra-* \
| d10v-* | d30v-* | dlx-* \
| e2k-* | elxsi-* \
| f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \
| h8300-* | h8500-* \
| hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \
| hexagon-* \
| i*86-* | i860-* | i960-* | ia16-* | ia64-* \
| ip2k-* | iq2000-* \
| k1om-* \
| le32-* | le64-* \
| lm32-* \
| m32c-* | m32r-* | m32rle-* \
| m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \
| m88110-* | m88k-* | maxq-* | mcore-* | metag-* \
| microblaze-* | microblazeel-* \
| mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \
| mips16-* \
| mips64-* | mips64el-* \
| mips64octeon-* | mips64octeonel-* \
| mips64orion-* | mips64orionel-* \
| mips64r5900-* | mips64r5900el-* \
| mips64vr-* | mips64vrel-* \
| mips64vr4100-* | mips64vr4100el-* \
| mips64vr4300-* | mips64vr4300el-* \
| mips64vr5000-* | mips64vr5000el-* \
| mips64vr5900-* | mips64vr5900el-* \
| mipsisa32-* | mipsisa32el-* \
| mipsisa32r2-* | mipsisa32r2el-* \
| mipsisa32r6-* | mipsisa32r6el-* \
| mipsisa64-* | mipsisa64el-* \
| mipsisa64r2-* | mipsisa64r2el-* \
| mipsisa64r6-* | mipsisa64r6el-* \
| mipsisa64sb1-* | mipsisa64sb1el-* \
| mipsisa64sr71k-* | mipsisa64sr71kel-* \
| mipsr5900-* | mipsr5900el-* \
| mipstx39-* | mipstx39el-* \
| mmix-* \
| mt-* \
| msp430-* \
| nds32-* | nds32le-* | nds32be-* \
| nios-* | nios2-* | nios2eb-* | nios2el-* \
| none-* | np1-* | ns16k-* | ns32k-* \
| open8-* \
| or1k*-* \
| orion-* \
| pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \
| powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \
| pru-* \
| pyramid-* \
| riscv32-* | riscv64-* \
| rl78-* | romp-* | rs6000-* | rx-* \
| sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \
| shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \
| sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \
| sparclite-* \
| sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx*-* \
| tahoe-* \
| tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \
| tile*-* \
| tron-* \
| ubicom32-* \
| v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \
| vax-* \
| visium-* \
| wasm32-* \
| we32k-* \
| x86-* | x86_64-* | xc16x-* | xps100-* \
| xstormy16-* | xtensa*-* \
| ymp-* \
| z8k-* | z80-*)
;;
# Recognize the basic CPU types without company name, with glob match.
xtensa*)
basic_machine=$basic_machine-unknown
;;
# Recognize the various machine names and aliases which stand
# for a CPU type and a company and sometimes even an OS.
386bsd)
basic_machine=i386-pc
os=-bsd
;;
3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc)
basic_machine=m68000-att
;;
3b*)
basic_machine=we32k-att
;;
a29khif)
basic_machine=a29k-amd
os=-udi
;;
abacus)
basic_machine=abacus-unknown
;;
adobe68k)
basic_machine=m68010-adobe
os=-scout
;;
alliant | fx80)
basic_machine=fx80-alliant
;;
altos | altos3068)
basic_machine=m68k-altos
;;
am29k)
basic_machine=a29k-none
os=-bsd
;;
amd64)
basic_machine=x86_64-pc
;;
amd64-*)
basic_machine=x86_64-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
amdahl)
basic_machine=580-amdahl
os=-sysv
;;
amiga | amiga-*)
basic_machine=m68k-unknown
;;
amigaos | amigados)
basic_machine=m68k-unknown
os=-amigaos
;;
amigaunix | amix)
basic_machine=m68k-unknown
os=-sysv4
;;
apollo68)
basic_machine=m68k-apollo
os=-sysv
;;
apollo68bsd)
basic_machine=m68k-apollo
os=-bsd
;;
aros)
basic_machine=i386-pc
os=-aros
;;
asmjs)
basic_machine=asmjs-unknown
;;
aux)
basic_machine=m68k-apple
os=-aux
;;
balance)
basic_machine=ns32k-sequent
os=-dynix
;;
blackfin)
basic_machine=bfin-unknown
os=-linux
;;
blackfin-*)
basic_machine=bfin-`echo "$basic_machine" | sed 's/^[^-]*-//'`
os=-linux
;;
bluegene*)
basic_machine=powerpc-ibm
os=-cnk
;;
c54x-*)
basic_machine=tic54x-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
c55x-*)
basic_machine=tic55x-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
c6x-*)
basic_machine=tic6x-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
c90)
basic_machine=c90-cray
os=-unicos
;;
cegcc)
basic_machine=arm-unknown
os=-cegcc
;;
convex-c1)
basic_machine=c1-convex
os=-bsd
;;
convex-c2)
basic_machine=c2-convex
os=-bsd
;;
convex-c32)
basic_machine=c32-convex
os=-bsd
;;
convex-c34)
basic_machine=c34-convex
os=-bsd
;;
convex-c38)
basic_machine=c38-convex
os=-bsd
;;
cray | j90)
basic_machine=j90-cray
os=-unicos
;;
craynv)
basic_machine=craynv-cray
os=-unicosmp
;;
cr16 | cr16-*)
basic_machine=cr16-unknown
os=-elf
;;
crds | unos)
basic_machine=m68k-crds
;;
crisv32 | crisv32-* | etraxfs*)
basic_machine=crisv32-axis
;;
cris | cris-* | etrax*)
basic_machine=cris-axis
;;
crx)
basic_machine=crx-unknown
os=-elf
;;
da30 | da30-*)
basic_machine=m68k-da30
;;
decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn)
basic_machine=mips-dec
;;
decsystem10* | dec10*)
basic_machine=pdp10-dec
os=-tops10
;;
decsystem20* | dec20*)
basic_machine=pdp10-dec
os=-tops20
;;
delta | 3300 | motorola-3300 | motorola-delta \
| 3300-motorola | delta-motorola)
basic_machine=m68k-motorola
;;
delta88)
basic_machine=m88k-motorola
os=-sysv3
;;
dicos)
basic_machine=i686-pc
os=-dicos
;;
djgpp)
basic_machine=i586-pc
os=-msdosdjgpp
;;
dpx20 | dpx20-*)
basic_machine=rs6000-bull
os=-bosx
;;
dpx2*)
basic_machine=m68k-bull
os=-sysv3
;;
e500v[12])
basic_machine=powerpc-unknown
os=$os"spe"
;;
e500v[12]-*)
basic_machine=powerpc-`echo "$basic_machine" | sed 's/^[^-]*-//'`
os=$os"spe"
;;
ebmon29k)
basic_machine=a29k-amd
os=-ebmon
;;
elxsi)
basic_machine=elxsi-elxsi
os=-bsd
;;
encore | umax | mmax)
basic_machine=ns32k-encore
;;
es1800 | OSE68k | ose68k | ose | OSE)
basic_machine=m68k-ericsson
os=-ose
;;
fx2800)
basic_machine=i860-alliant
;;
genix)
basic_machine=ns32k-ns
;;
gmicro)
basic_machine=tron-gmicro
os=-sysv
;;
go32)
basic_machine=i386-pc
os=-go32
;;
h3050r* | hiux*)
basic_machine=hppa1.1-hitachi
os=-hiuxwe2
;;
h8300hms)
basic_machine=h8300-hitachi
os=-hms
;;
h8300xray)
basic_machine=h8300-hitachi
os=-xray
;;
h8500hms)
basic_machine=h8500-hitachi
os=-hms
;;
harris)
basic_machine=m88k-harris
os=-sysv3
;;
hp300-*)
basic_machine=m68k-hp
;;
hp300bsd)
basic_machine=m68k-hp
os=-bsd
;;
hp300hpux)
basic_machine=m68k-hp
os=-hpux
;;
hp3k9[0-9][0-9] | hp9[0-9][0-9])
basic_machine=hppa1.0-hp
;;
hp9k2[0-9][0-9] | hp9k31[0-9])
basic_machine=m68000-hp
;;
hp9k3[2-9][0-9])
basic_machine=m68k-hp
;;
hp9k6[0-9][0-9] | hp6[0-9][0-9])
basic_machine=hppa1.0-hp
;;
hp9k7[0-79][0-9] | hp7[0-79][0-9])
basic_machine=hppa1.1-hp
;;
hp9k78[0-9] | hp78[0-9])
# FIXME: really hppa2.0-hp
basic_machine=hppa1.1-hp
;;
hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893)
# FIXME: really hppa2.0-hp
basic_machine=hppa1.1-hp
;;
hp9k8[0-9][13679] | hp8[0-9][13679])
basic_machine=hppa1.1-hp
;;
hp9k8[0-9][0-9] | hp8[0-9][0-9])
basic_machine=hppa1.0-hp
;;
hppaosf)
basic_machine=hppa1.1-hp
os=-osf
;;
hppro)
basic_machine=hppa1.1-hp
os=-proelf
;;
i370-ibm* | ibm*)
basic_machine=i370-ibm
;;
i*86v32)
basic_machine=`echo "$1" | sed -e 's/86.*/86-pc/'`
os=-sysv32
;;
i*86v4*)
basic_machine=`echo "$1" | sed -e 's/86.*/86-pc/'`
os=-sysv4
;;
i*86v)
basic_machine=`echo "$1" | sed -e 's/86.*/86-pc/'`
os=-sysv
;;
i*86sol2)
basic_machine=`echo "$1" | sed -e 's/86.*/86-pc/'`
os=-solaris2
;;
i386mach)
basic_machine=i386-mach
os=-mach
;;
vsta)
basic_machine=i386-unknown
os=-vsta
;;
iris | iris4d)
basic_machine=mips-sgi
case $os in
-irix*)
;;
*)
os=-irix4
;;
esac
;;
isi68 | isi)
basic_machine=m68k-isi
os=-sysv
;;
leon-*|leon[3-9]-*)
basic_machine=sparc-`echo "$basic_machine" | sed 's/-.*//'`
;;
m68knommu)
basic_machine=m68k-unknown
os=-linux
;;
m68knommu-*)
basic_machine=m68k-`echo "$basic_machine" | sed 's/^[^-]*-//'`
os=-linux
;;
magnum | m3230)
basic_machine=mips-mips
os=-sysv
;;
merlin)
basic_machine=ns32k-utek
os=-sysv
;;
microblaze*)
basic_machine=microblaze-xilinx
;;
mingw64)
basic_machine=x86_64-pc
os=-mingw64
;;
mingw32)
basic_machine=i686-pc
os=-mingw32
;;
mingw32ce)
basic_machine=arm-unknown
os=-mingw32ce
;;
miniframe)
basic_machine=m68000-convergent
;;
*mint | -mint[0-9]* | *MiNT | *MiNT[0-9]*)
basic_machine=m68k-atari
os=-mint
;;
mips3*-*)
basic_machine=`echo "$basic_machine" | sed -e 's/mips3/mips64/'`
;;
mips3*)
basic_machine=`echo "$basic_machine" | sed -e 's/mips3/mips64/'`-unknown
;;
monitor)
basic_machine=m68k-rom68k
os=-coff
;;
morphos)
basic_machine=powerpc-unknown
os=-morphos
;;
moxiebox)
basic_machine=moxie-unknown
os=-moxiebox
;;
msdos)
basic_machine=i386-pc
os=-msdos
;;
ms1-*)
basic_machine=`echo "$basic_machine" | sed -e 's/ms1-/mt-/'`
;;
msys)
basic_machine=i686-pc
os=-msys
;;
mvs)
basic_machine=i370-ibm
os=-mvs
;;
nacl)
basic_machine=le32-unknown
os=-nacl
;;
ncr3000)
basic_machine=i486-ncr
os=-sysv4
;;
netbsd386)
basic_machine=i386-unknown
os=-netbsd
;;
netwinder)
basic_machine=armv4l-rebel
os=-linux
;;
news | news700 | news800 | news900)
basic_machine=m68k-sony
os=-newsos
;;
news1000)
basic_machine=m68030-sony
os=-newsos
;;
news-3600 | risc-news)
basic_machine=mips-sony
os=-newsos
;;
necv70)
basic_machine=v70-nec
os=-sysv
;;
next | m*-next)
basic_machine=m68k-next
case $os in
-nextstep* )
;;
-ns2*)
os=-nextstep2
;;
*)
os=-nextstep3
;;
esac
;;
nh3000)
basic_machine=m68k-harris
os=-cxux
;;
nh[45]000)
basic_machine=m88k-harris
os=-cxux
;;
nindy960)
basic_machine=i960-intel
os=-nindy
;;
mon960)
basic_machine=i960-intel
os=-mon960
;;
nonstopux)
basic_machine=mips-compaq
os=-nonstopux
;;
np1)
basic_machine=np1-gould
;;
neo-tandem)
basic_machine=neo-tandem
;;
nse-tandem)
basic_machine=nse-tandem
;;
nsr-tandem)
basic_machine=nsr-tandem
;;
nsv-tandem)
basic_machine=nsv-tandem
;;
nsx-tandem)
basic_machine=nsx-tandem
;;
op50n-* | op60c-*)
basic_machine=hppa1.1-oki
os=-proelf
;;
openrisc | openrisc-*)
basic_machine=or32-unknown
;;
os400)
basic_machine=powerpc-ibm
os=-os400
;;
OSE68000 | ose68000)
basic_machine=m68000-ericsson
os=-ose
;;
os68k)
basic_machine=m68k-none
os=-os68k
;;
pa-hitachi)
basic_machine=hppa1.1-hitachi
os=-hiuxwe2
;;
paragon)
basic_machine=i860-intel
os=-osf
;;
parisc)
basic_machine=hppa-unknown
os=-linux
;;
parisc-*)
basic_machine=hppa-`echo "$basic_machine" | sed 's/^[^-]*-//'`
os=-linux
;;
pbd)
basic_machine=sparc-tti
;;
pbb)
basic_machine=m68k-tti
;;
pc532 | pc532-*)
basic_machine=ns32k-pc532
;;
pc98)
basic_machine=i386-pc
;;
pc98-*)
basic_machine=i386-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
pentium | p5 | k5 | k6 | nexgen | viac3)
basic_machine=i586-pc
;;
pentiumpro | p6 | 6x86 | athlon | athlon_*)
basic_machine=i686-pc
;;
pentiumii | pentium2 | pentiumiii | pentium3)
basic_machine=i686-pc
;;
pentium4)
basic_machine=i786-pc
;;
pentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*)
basic_machine=i586-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
pentiumpro-* | p6-* | 6x86-* | athlon-*)
basic_machine=i686-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
pentiumii-* | pentium2-* | pentiumiii-* | pentium3-*)
basic_machine=i686-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
pentium4-*)
basic_machine=i786-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
pn)
basic_machine=pn-gould
;;
power) basic_machine=power-ibm
;;
ppc | ppcbe) basic_machine=powerpc-unknown
;;
ppc-* | ppcbe-*)
basic_machine=powerpc-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
ppcle | powerpclittle)
basic_machine=powerpcle-unknown
;;
ppcle-* | powerpclittle-*)
basic_machine=powerpcle-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
ppc64) basic_machine=powerpc64-unknown
;;
ppc64-*) basic_machine=powerpc64-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
ppc64le | powerpc64little)
basic_machine=powerpc64le-unknown
;;
ppc64le-* | powerpc64little-*)
basic_machine=powerpc64le-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
ps2)
basic_machine=i386-ibm
;;
pw32)
basic_machine=i586-unknown
os=-pw32
;;
rdos | rdos64)
basic_machine=x86_64-pc
os=-rdos
;;
rdos32)
basic_machine=i386-pc
os=-rdos
;;
rom68k)
basic_machine=m68k-rom68k
os=-coff
;;
rm[46]00)
basic_machine=mips-siemens
;;
rtpc | rtpc-*)
basic_machine=romp-ibm
;;
s390 | s390-*)
basic_machine=s390-ibm
;;
s390x | s390x-*)
basic_machine=s390x-ibm
;;
sa29200)
basic_machine=a29k-amd
os=-udi
;;
sb1)
basic_machine=mipsisa64sb1-unknown
;;
sb1el)
basic_machine=mipsisa64sb1el-unknown
;;
sde)
basic_machine=mipsisa32-sde
os=-elf
;;
sei)
basic_machine=mips-sei
os=-seiux
;;
sequent)
basic_machine=i386-sequent
;;
sh5el)
basic_machine=sh5le-unknown
;;
simso-wrs)
basic_machine=sparclite-wrs
os=-vxworks
;;
sps7)
basic_machine=m68k-bull
os=-sysv2
;;
spur)
basic_machine=spur-unknown
;;
st2000)
basic_machine=m68k-tandem
;;
stratus)
basic_machine=i860-stratus
os=-sysv4
;;
strongarm-* | thumb-*)
basic_machine=arm-`echo "$basic_machine" | sed 's/^[^-]*-//'`
;;
sun2)
basic_machine=m68000-sun
;;
sun2os3)
basic_machine=m68000-sun
os=-sunos3
;;
sun2os4)
basic_machine=m68000-sun
os=-sunos4
;;
sun3os3)
basic_machine=m68k-sun
os=-sunos3
;;
sun3os4)
basic_machine=m68k-sun
os=-sunos4
;;
sun4os3)
basic_machine=sparc-sun
os=-sunos3
;;
sun4os4)
basic_machine=sparc-sun
os=-sunos4
;;
sun4sol2)
basic_machine=sparc-sun
os=-solaris2
;;
sun3 | sun3-*)
basic_machine=m68k-sun
;;
sun4)
basic_machine=sparc-sun
;;
sun386 | sun386i | roadrunner)
basic_machine=i386-sun
;;
sv1)
basic_machine=sv1-cray
os=-unicos
;;
symmetry)
basic_machine=i386-sequent
os=-dynix
;;
t3e)
basic_machine=alphaev5-cray
os=-unicos
;;
t90)
basic_machine=t90-cray
os=-unicos
;;
tile*)
basic_machine=$basic_machine-unknown
os=-linux-gnu
;;
tx39)
basic_machine=mipstx39-unknown
;;
tx39el)
basic_machine=mipstx39el-unknown
;;
toad1)
basic_machine=pdp10-xkl
os=-tops20
;;
tower | tower-32)
basic_machine=m68k-ncr
;;
tpf)
basic_machine=s390x-ibm
os=-tpf
;;
udi29k)
basic_machine=a29k-amd
os=-udi
;;
ultra3)
basic_machine=a29k-nyu
os=-sym1
;;
v810 | necv810)
basic_machine=v810-nec
os=-none
;;
vaxv)
basic_machine=vax-dec
os=-sysv
;;
vms)
basic_machine=vax-dec
os=-vms
;;
vpp*|vx|vx-*)
basic_machine=f301-fujitsu
;;
vxworks960)
basic_machine=i960-wrs
os=-vxworks
;;
vxworks68)
basic_machine=m68k-wrs
os=-vxworks
;;
vxworks29k)
basic_machine=a29k-wrs
os=-vxworks
;;
w65*)
basic_machine=w65-wdc
os=-none
;;
w89k-*)
basic_machine=hppa1.1-winbond
os=-proelf
;;
x64)
basic_machine=x86_64-pc
;;
xbox)
basic_machine=i686-pc
os=-mingw32
;;
xps | xps100)
basic_machine=xps100-honeywell
;;
xscale-* | xscalee[bl]-*)
basic_machine=`echo "$basic_machine" | sed 's/^xscale/arm/'`
;;
ymp)
basic_machine=ymp-cray
os=-unicos
;;
none)
basic_machine=none-none
os=-none
;;
# Here we handle the default manufacturer of certain CPU types. It is in
# some cases the only manufacturer, in others, it is the most popular.
w89k)
basic_machine=hppa1.1-winbond
;;
op50n)
basic_machine=hppa1.1-oki
;;
op60c)
basic_machine=hppa1.1-oki
;;
romp)
basic_machine=romp-ibm
;;
mmix)
basic_machine=mmix-knuth
;;
rs6000)
basic_machine=rs6000-ibm
;;
vax)
basic_machine=vax-dec
;;
pdp11)
basic_machine=pdp11-dec
;;
we32k)
basic_machine=we32k-att
;;
sh[1234] | sh[24]a | sh[24]aeb | sh[34]eb | sh[1234]le | sh[23]ele)
basic_machine=sh-unknown
;;
cydra)
basic_machine=cydra-cydrome
;;
orion)
basic_machine=orion-highlevel
;;
orion105)
basic_machine=clipper-highlevel
;;
mac | mpw | mac-mpw)
basic_machine=m68k-apple
;;
pmac | pmac-mpw)
basic_machine=powerpc-apple
;;
*-unknown)
# Make sure to match an already-canonicalized machine name.
;;
*)
echo Invalid configuration \`"$1"\': machine \`"$basic_machine"\' not recognized 1>&2
exit 1
;;
esac
# Here we canonicalize certain aliases for manufacturers.
case $basic_machine in
*-digital*)
basic_machine=`echo "$basic_machine" | sed 's/digital.*/dec/'`
;;
*-commodore*)
basic_machine=`echo "$basic_machine" | sed 's/commodore.*/cbm/'`
;;
*)
;;
esac
# Decode manufacturer-specific aliases for certain operating systems.
if [ x"$os" != x"" ]
then
case $os in
# First match some system type aliases that might get confused
# with valid system types.
# -solaris* is a basic system type, with this one exception.
-auroraux)
os=-auroraux
;;
-solaris1 | -solaris1.*)
os=`echo $os | sed -e 's|solaris1|sunos4|'`
;;
-solaris)
os=-solaris2
;;
-unixware*)
os=-sysv4.2uw
;;
-gnu/linux*)
os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'`
;;
# es1800 is here to avoid being matched by es* (a different OS)
-es1800*)
os=-ose
;;
# Now accept the basic system types.
# The portable systems comes first.
# Each alternative MUST end in a * to match a version number.
# -sysv* is not here because it comes later, after sysvr4.
-gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \
| -*vms* | -sco* | -esix* | -isc* | -aix* | -cnk* | -sunos | -sunos[34]*\
| -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \
| -sym* | -kopensolaris* | -plan9* \
| -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \
| -aos* | -aros* | -cloudabi* | -sortix* \
| -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \
| -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \
| -hiux* | -knetbsd* | -mirbsd* | -netbsd* \
| -bitrig* | -openbsd* | -solidbsd* | -libertybsd* \
| -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \
| -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \
| -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \
| -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \
| -chorusos* | -chorusrdb* | -cegcc* | -glidix* \
| -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \
| -midipix* | -mingw32* | -mingw64* | -linux-gnu* | -linux-android* \
| -linux-newlib* | -linux-musl* | -linux-uclibc* \
| -uxpv* | -beos* | -mpeix* | -udk* | -moxiebox* \
| -interix* | -uwin* | -mks* | -rhapsody* | -darwin* \
| -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \
| -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \
| -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \
| -morphos* | -superux* | -rtmk* | -windiss* \
| -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \
| -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es* \
| -onefs* | -tirtos* | -phoenix* | -fuchsia* | -redox* | -bme* \
| -midnightbsd*)
# Remember, each alternative MUST END IN *, to match a version number.
;;
-qnx*)
case $basic_machine in
x86-* | i*86-*)
;;
*)
os=-nto$os
;;
esac
;;
-nto-qnx*)
;;
-nto*)
os=`echo $os | sed -e 's|nto|nto-qnx|'`
;;
-sim | -xray | -os68k* | -v88r* \
| -windows* | -osx | -abug | -netware* | -os9* \
| -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*)
;;
-mac*)
os=`echo "$os" | sed -e 's|mac|macos|'`
;;
-linux-dietlibc)
os=-linux-dietlibc
;;
-linux*)
os=`echo $os | sed -e 's|linux|linux-gnu|'`
;;
-sunos5*)
os=`echo "$os" | sed -e 's|sunos5|solaris2|'`
;;
-sunos6*)
os=`echo "$os" | sed -e 's|sunos6|solaris3|'`
;;
-opened*)
os=-openedition
;;
-os400*)
os=-os400
;;
-wince*)
os=-wince
;;
-utek*)
os=-bsd
;;
-dynix*)
os=-bsd
;;
-acis*)
os=-aos
;;
-atheos*)
os=-atheos
;;
-syllable*)
os=-syllable
;;
-386bsd)
os=-bsd
;;
-ctix* | -uts*)
os=-sysv
;;
-nova*)
os=-rtmk-nova
;;
-ns2)
os=-nextstep2
;;
-nsk*)
os=-nsk
;;
# Preserve the version number of sinix5.
-sinix5.*)
os=`echo $os | sed -e 's|sinix|sysv|'`
;;
-sinix*)
os=-sysv4
;;
-tpf*)
os=-tpf
;;
-triton*)
os=-sysv3
;;
-oss*)
os=-sysv3
;;
-svr4*)
os=-sysv4
;;
-svr3)
os=-sysv3
;;
-sysvr4)
os=-sysv4
;;
# This must come after -sysvr4.
-sysv*)
;;
-ose*)
os=-ose
;;
-*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*)
os=-mint
;;
-zvmoe)
os=-zvmoe
;;
-dicos*)
os=-dicos
;;
-pikeos*)
# Until real need of OS specific support for
# particular features comes up, bare metal
# configurations are quite functional.
case $basic_machine in
arm*)
os=-eabi
;;
*)
os=-elf
;;
esac
;;
-nacl*)
;;
-ios)
;;
-none)
;;
*)
# Get rid of the `-' at the beginning of $os.
os=`echo $os | sed 's/[^-]*-//'`
echo Invalid configuration \`"$1"\': system \`"$os"\' not recognized 1>&2
exit 1
;;
esac
else
# Here we handle the default operating systems that come with various machines.
# The value should be what the vendor currently ships out the door with their
# machine or put another way, the most popular os provided with the machine.
# Note that if you're going to try to match "-MANUFACTURER" here (say,
# "-sun"), then you have to tell the case statement up towards the top
# that MANUFACTURER isn't an operating system. Otherwise, code above
# will signal an error saying that MANUFACTURER isn't an operating
# system, and we'll never get to this point.
case $basic_machine in
score-*)
os=-elf
;;
spu-*)
os=-elf
;;
*-acorn)
os=-riscix1.2
;;
arm*-rebel)
os=-linux
;;
arm*-semi)
os=-aout
;;
c4x-* | tic4x-*)
os=-coff
;;
c8051-*)
os=-elf
;;
hexagon-*)
os=-elf
;;
tic54x-*)
os=-coff
;;
tic55x-*)
os=-coff
;;
tic6x-*)
os=-coff
;;
# This must come before the *-dec entry.
pdp10-*)
os=-tops20
;;
pdp11-*)
os=-none
;;
*-dec | vax-*)
os=-ultrix4.2
;;
m68*-apollo)
os=-domain
;;
i386-sun)
os=-sunos4.0.2
;;
m68000-sun)
os=-sunos3
;;
m68*-cisco)
os=-aout
;;
mep-*)
os=-elf
;;
mips*-cisco)
os=-elf
;;
mips*-*)
os=-elf
;;
or32-*)
os=-coff
;;
*-tti) # must be before sparc entry or we get the wrong os.
os=-sysv3
;;
sparc-* | *-sun)
os=-sunos4.1.1
;;
pru-*)
os=-elf
;;
*-be)
os=-beos
;;
*-ibm)
os=-aix
;;
*-knuth)
os=-mmixware
;;
*-wec)
os=-proelf
;;
*-winbond)
os=-proelf
;;
*-oki)
os=-proelf
;;
*-hp)
os=-hpux
;;
*-hitachi)
os=-hiux
;;
i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent)
os=-sysv
;;
*-cbm)
os=-amigaos
;;
*-dg)
os=-dgux
;;
*-dolphin)
os=-sysv3
;;
m68k-ccur)
os=-rtu
;;
m88k-omron*)
os=-luna
;;
*-next)
os=-nextstep
;;
*-sequent)
os=-ptx
;;
*-crds)
os=-unos
;;
*-ns)
os=-genix
;;
i370-*)
os=-mvs
;;
*-gould)
os=-sysv
;;
*-highlevel)
os=-bsd
;;
*-encore)
os=-bsd
;;
*-sgi)
os=-irix
;;
*-siemens)
os=-sysv4
;;
*-masscomp)
os=-rtu
;;
f30[01]-fujitsu | f700-fujitsu)
os=-uxpv
;;
*-rom68k)
os=-coff
;;
*-*bug)
os=-coff
;;
*-apple)
os=-macos
;;
*-atari*)
os=-mint
;;
*)
os=-none
;;
esac
fi
# Here we handle the case where we know the os, and the CPU type, but not the
# manufacturer. We pick the logical manufacturer.
vendor=unknown
case $basic_machine in
*-unknown)
case $os in
-riscix*)
vendor=acorn
;;
-sunos*)
vendor=sun
;;
-cnk*|-aix*)
vendor=ibm
;;
-beos*)
vendor=be
;;
-hpux*)
vendor=hp
;;
-mpeix*)
vendor=hp
;;
-hiux*)
vendor=hitachi
;;
-unos*)
vendor=crds
;;
-dgux*)
vendor=dg
;;
-luna*)
vendor=omron
;;
-genix*)
vendor=ns
;;
-mvs* | -opened*)
vendor=ibm
;;
-os400*)
vendor=ibm
;;
-ptx*)
vendor=sequent
;;
-tpf*)
vendor=ibm
;;
-vxsim* | -vxworks* | -windiss*)
vendor=wrs
;;
-aux*)
vendor=apple
;;
-hms*)
vendor=hitachi
;;
-mpw* | -macos*)
vendor=apple
;;
-*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*)
vendor=atari
;;
-vos*)
vendor=stratus
;;
esac
basic_machine=`echo "$basic_machine" | sed "s/unknown/$vendor/"`
;;
esac
echo "$basic_machine$os"
exit
# Local variables:
# eval: (add-hook 'write-file-functions 'time-stamp)
# time-stamp-start: "timestamp='"
# time-stamp-format: "%:y-%02m-%02d"
# time-stamp-end: "'"
# End:
liblognorm-2.0.6/INSTALL 0000644 0001750 0001750 00000036614 13370251154 011650 0000000 0000000 Installation Instructions
*************************
Copyright (C) 1994-1996, 1999-2002, 2004-2016 Free Software
Foundation, Inc.
Copying and distribution of this file, with or without modification,
are permitted in any medium without royalty provided the copyright
notice and this notice are preserved. This file is offered as-is,
without warranty of any kind.
Basic Installation
==================
Briefly, the shell command './configure && make && make install'
should configure, build, and install this package. The following
more-detailed instructions are generic; see the 'README' file for
instructions specific to this package. Some packages provide this
'INSTALL' file but do not implement all of the features documented
below. The lack of an optional feature in a given package is not
necessarily a bug. More recommendations for GNU packages can be found
in *note Makefile Conventions: (standards)Makefile Conventions.
The 'configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a 'Makefile' in each directory of the package.
It may also create one or more '.h' files containing system-dependent
definitions. Finally, it creates a shell script 'config.status' that
you can run in the future to recreate the current configuration, and a
file 'config.log' containing compiler output (useful mainly for
debugging 'configure').
It can also use an optional file (typically called 'config.cache' and
enabled with '--cache-file=config.cache' or simply '-C') that saves the
results of its tests to speed up reconfiguring. Caching is disabled by
default to prevent problems with accidental use of stale cache files.
If you need to do unusual things to compile the package, please try
to figure out how 'configure' could check whether to do them, and mail
diffs or instructions to the address given in the 'README' so they can
be considered for the next release. If you are using the cache, and at
some point 'config.cache' contains results you don't want to keep, you
may remove or edit it.
The file 'configure.ac' (or 'configure.in') is used to create
'configure' by a program called 'autoconf'. You need 'configure.ac' if
you want to change it or regenerate 'configure' using a newer version of
'autoconf'.
The simplest way to compile this package is:
1. 'cd' to the directory containing the package's source code and type
'./configure' to configure the package for your system.
Running 'configure' might take a while. While running, it prints
some messages telling which features it is checking for.
2. Type 'make' to compile the package.
3. Optionally, type 'make check' to run any self-tests that come with
the package, generally using the just-built uninstalled binaries.
4. Type 'make install' to install the programs and any data files and
documentation. When installing into a prefix owned by root, it is
recommended that the package be configured and built as a regular
user, and only the 'make install' phase executed with root
privileges.
5. Optionally, type 'make installcheck' to repeat any self-tests, but
this time using the binaries in their final installed location.
This target does not install anything. Running this target as a
regular user, particularly if the prior 'make install' required
root privileges, verifies that the installation completed
correctly.
6. You can remove the program binaries and object files from the
source code directory by typing 'make clean'. To also remove the
files that 'configure' created (so you can compile the package for
a different kind of computer), type 'make distclean'. There is
also a 'make maintainer-clean' target, but that is intended mainly
for the package's developers. If you use it, you may have to get
all sorts of other programs in order to regenerate files that came
with the distribution.
7. Often, you can also type 'make uninstall' to remove the installed
files again. In practice, not all packages have tested that
uninstallation works correctly, even though it is required by the
GNU Coding Standards.
8. Some packages, particularly those that use Automake, provide 'make
distcheck', which can be used by developers to test that all other
targets like 'make install' and 'make uninstall' work correctly.
This target is generally not run by end users.
Compilers and Options
=====================
Some systems require unusual options for compilation or linking that
the 'configure' script does not know about. Run './configure --help'
for details on some of the pertinent environment variables.
You can give 'configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here is
an example:
./configure CC=c99 CFLAGS=-g LIBS=-lposix
*Note Defining Variables::, for more details.
Compiling For Multiple Architectures
====================================
You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you can use GNU 'make'. 'cd' to the
directory where you want the object files and executables to go and run
the 'configure' script. 'configure' automatically checks for the source
code in the directory that 'configure' is in and in '..'. This is known
as a "VPATH" build.
With a non-GNU 'make', it is safer to compile the package for one
architecture at a time in the source code directory. After you have
installed the package for one architecture, use 'make distclean' before
reconfiguring for another architecture.
On MacOS X 10.5 and later systems, you can create libraries and
executables that work on multiple system types--known as "fat" or
"universal" binaries--by specifying multiple '-arch' options to the
compiler but only a single '-arch' option to the preprocessor. Like
this:
./configure CC="gcc -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
CXX="g++ -arch i386 -arch x86_64 -arch ppc -arch ppc64" \
CPP="gcc -E" CXXCPP="g++ -E"
This is not guaranteed to produce working output in all cases; you
may have to build one architecture at a time and combine the results
using the 'lipo' tool if you have problems.
Installation Names
==================
By default, 'make install' installs the package's commands under
'/usr/local/bin', include files under '/usr/local/include', etc. You
can specify an installation prefix other than '/usr/local' by giving
'configure' the option '--prefix=PREFIX', where PREFIX must be an
absolute file name.
You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
pass the option '--exec-prefix=PREFIX' to 'configure', the package uses
PREFIX as the prefix for installing programs and libraries.
Documentation and other data files still use the regular prefix.
In addition, if you use an unusual directory layout you can give
options like '--bindir=DIR' to specify different values for particular
kinds of files. Run 'configure --help' for a list of the directories
you can set and what kinds of files go in them. In general, the default
for these options is expressed in terms of '${prefix}', so that
specifying just '--prefix' will affect all of the other directory
specifications that were not explicitly provided.
The most portable way to affect installation locations is to pass the
correct locations to 'configure'; however, many packages provide one or
both of the following shortcuts of passing variable assignments to the
'make install' command line to change installation locations without
having to reconfigure or recompile.
The first method involves providing an override variable for each
affected directory. For example, 'make install
prefix=/alternate/directory' will choose an alternate location for all
directory configuration variables that were expressed in terms of
'${prefix}'. Any directories that were specified during 'configure',
but not in terms of '${prefix}', must each be overridden at install time
for the entire installation to be relocated. The approach of makefile
variable overrides for each directory variable is required by the GNU
Coding Standards, and ideally causes no recompilation. However, some
platforms have known limitations with the semantics of shared libraries
that end up requiring recompilation when using this method, particularly
noticeable in packages that use GNU Libtool.
The second method involves providing the 'DESTDIR' variable. For
example, 'make install DESTDIR=/alternate/directory' will prepend
'/alternate/directory' before all installation names. The approach of
'DESTDIR' overrides is not required by the GNU Coding Standards, and
does not work on platforms that have drive letters. On the other hand,
it does better at avoiding recompilation issues, and works well even
when some directory options were not specified in terms of '${prefix}'
at 'configure' time.
Optional Features
=================
If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving 'configure' the
option '--program-prefix=PREFIX' or '--program-suffix=SUFFIX'.
Some packages pay attention to '--enable-FEATURE' options to
'configure', where FEATURE indicates an optional part of the package.
They may also pay attention to '--with-PACKAGE' options, where PACKAGE
is something like 'gnu-as' or 'x' (for the X Window System). The
'README' should mention any '--enable-' and '--with-' options that the
package recognizes.
For packages that use the X Window System, 'configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the 'configure' options '--x-includes=DIR' and
'--x-libraries=DIR' to specify their locations.
Some packages offer the ability to configure how verbose the
execution of 'make' will be. For these packages, running './configure
--enable-silent-rules' sets the default to minimal output, which can be
overridden with 'make V=1'; while running './configure
--disable-silent-rules' sets the default to verbose, which can be
overridden with 'make V=0'.
Particular systems
==================
On HP-UX, the default C compiler is not ANSI C compatible. If GNU CC
is not installed, it is recommended to use the following options in
order to use an ANSI C compiler:
./configure CC="cc -Ae -D_XOPEN_SOURCE=500"
and if that doesn't work, install pre-built binaries of GCC for HP-UX.
HP-UX 'make' updates targets which have the same time stamps as their
prerequisites, which makes it generally unusable when shipped generated
files such as 'configure' are involved. Use GNU 'make' instead.
On OSF/1 a.k.a. Tru64, some versions of the default C compiler cannot
parse its '<wchar.h>' header file. The option '-nodtk' can be used as a
workaround. If GNU CC is not installed, it is therefore recommended to
try
./configure CC="cc"
and if that doesn't work, try
./configure CC="cc -nodtk"
On Solaris, don't put '/usr/ucb' early in your 'PATH'. This
directory contains several dysfunctional programs; working variants of
these programs are available in '/usr/bin'. So, if you need '/usr/ucb'
in your 'PATH', put it _after_ '/usr/bin'.
On Haiku, software installed for all users goes in '/boot/common',
not '/usr/local'. It is recommended to use the following options:
./configure --prefix=/boot/common
Specifying the System Type
==========================
There may be some features 'configure' cannot figure out
automatically, but needs to determine by the type of machine the package
will run on. Usually, assuming the package is built to be run on the
_same_ architectures, 'configure' can figure that out, but if it prints
a message saying it cannot guess the machine type, give it the
'--build=TYPE' option. TYPE can either be a short name for the system
type, such as 'sun4', or a canonical name which has the form:
CPU-COMPANY-SYSTEM
where SYSTEM can have one of these forms:
OS
KERNEL-OS
See the file 'config.sub' for the possible values of each field. If
'config.sub' isn't included in this package, then this package doesn't
need to know the machine type.
If you are _building_ compiler tools for cross-compiling, you should
use the option '--target=TYPE' to select the type of system they will
produce code for.
If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with '--host=TYPE'.
Sharing Defaults
================
If you want to set default values for 'configure' scripts to share,
you can create a site shell script called 'config.site' that gives
default values for variables like 'CC', 'cache_file', and 'prefix'.
'configure' looks for 'PREFIX/share/config.site' if it exists, then
'PREFIX/etc/config.site' if it exists. Or, you can set the
'CONFIG_SITE' environment variable to the location of the site script.
A warning: not all 'configure' scripts look for a site script.
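As a small sketch (the values shown are placeholders, not recommendations),
a 'config.site' might contain:
CC=gcc
CFLAGS='-O2 -g'
prefix=/opt/local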
Defining Variables
==================
Variables not defined in a site shell script can be set in the
environment passed to 'configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the 'configure' command line, using 'VAR=value'. For example:
./configure CC=/usr/local2/bin/gcc
causes the specified 'gcc' to be used as the C compiler (unless it is
overridden in the site shell script).
Unfortunately, this technique does not work for 'CONFIG_SHELL' due to an
Autoconf limitation. Until the limitation is lifted, you can use this
workaround:
CONFIG_SHELL=/bin/bash ./configure CONFIG_SHELL=/bin/bash
'configure' Invocation
======================
'configure' recognizes the following options to control how it
operates.
'--help'
'-h'
Print a summary of all of the options to 'configure', and exit.
'--help=short'
'--help=recursive'
Print a summary of the options unique to this package's
'configure', and exit. The 'short' variant lists options used only
in the top level, while the 'recursive' variant lists options also
present in any nested packages.
'--version'
'-V'
Print the version of Autoconf used to generate the 'configure'
script, and exit.
'--cache-file=FILE'
Enable the cache: use and save the results of the tests in FILE,
traditionally 'config.cache'. FILE defaults to '/dev/null' to
disable caching.
'--config-cache'
'-C'
Alias for '--cache-file=config.cache'.
'--quiet'
'--silent'
'-q'
Do not print messages saying which checks are being made. To
suppress all normal output, redirect it to '/dev/null' (any error
messages will still be shown).
'--srcdir=DIR'
Look for the package's source code in directory DIR. Usually
'configure' can determine that directory automatically.
'--prefix=DIR'
Use DIR as the installation prefix. *note Installation Names:: for
more details, including other options available for fine-tuning the
installation locations.
'--no-create'
'-n'
Run the configure checks, but stop before creating any output
files.
'configure' also accepts some other, not widely useful, options. Run
'configure --help' for more details.
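As an illustrative combination of the options described above (the
prefix is only an example), one might run:
./configure -C --prefix=$HOME/local --quiet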
liblognorm-2.0.6/doc/introduction.rst
Introduction
============
Briefly described, liblognorm is a tool to normalize log data.
People who need to look at logs often face a common problem: logs from
different machines (and different vendors) usually come in different
formats. Even for the same type of log (e.g. from firewalls), the entries
differ so much that they are hard to read together. This is where
liblognorm comes into play. With this tool you can normalize all your
logs. All you need is liblognorm, its dependencies, and a sample database
that fits the logs you want to normalize.
For example, if you have traffic logs from three different firewalls,
liblognorm can "normalize" the events into generic ones. Among other
things, it will extract source and destination IP addresses and ports and
make them available via well-defined fields. As a result, a common log
analysis application can work on that common set, independent of the
actual firewalls feeding it. Even better, once we have a well-understood
interim format, it is also easy to convert it into any other
vendor-specific format, so that you can use that vendor's analysis tool.
By design, liblognorm is constructed as a library. Thus, it can be used by
other tools.
In short, liblognorm works by:
1. Matching a line to a rule from a predefined configuration;
2. Picking out variable fields from the line;
3. Returning them as a JSON hash object.
A consumer of this object can then construct a new, normalized log line
on its own, as illustrated below.
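As a rough illustration (both the rule and the log line below are made
up, and the rule uses the legacy v1 rulebase syntax), a single rulebase
entry might look like this::
rule=:%date:date-rfc3164% %host:word% su: session opened for user %user:word%
A log line such as ``Jun 12 08:12:04 server1 su: session opened for user
root`` would then be normalized into a JSON object along the lines of::
{ "date": "Jun 12 08:12:04", "host": "server1", "user": "root" }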
liblognorm-2.0.6/doc/installation.rst
How to install
==============
Here you can find the first steps to install and try liblognorm.
Getting liblognorm
------------------
There are several ways to install liblognorm. You can install it
from your distribution, if it provides a package. You can also get binary
packages from the Rsyslog repositories:
- `RedHat Enterprise Linux or CentOS `_
- `Ubuntu `_
- `Debian `_
Or you can build your own binaries from source. You can fetch all
sources from git (below you will find all the commands you need) or you
can download them as tarballs at:
- `libestr `_
- `liblognorm `_
Please note that if you compile from tarballs, you have to perform the
same steps as mentioned below, apart from::
$ git clone ...
$ autoreconf -vfi
Building from git
-----------------
To build liblognorm from sources, you need to have
`json-c `_ installed.
Open a terminal and switch to the folder where you want to build
liblognorm. Below you will find the necessary commands. First, build
and install the prerequisite library **libestr**::
$ git clone git://git.adiscon.com/git/libestr.git
$ cd libestr
$ autoreconf -vfi
$ ./configure
$ make
$ sudo make install
Then leave that folder and repeat these steps for liblognorm::
$ cd ..
$ git clone git://git.adiscon.com/git/liblognorm.git
$ cd liblognorm
$ autoreconf -vfi
$ ./configure
$ make
$ sudo make install
That’s all you have to do.
Testing
-------
For a first test we need two further things: a test log and a rulebase.
Both can be downloaded `here
`_.
After downloading these examples you can use liblognorm. Go to
liblognorm/src and use the command below::
$ ./lognormalize -r messages.sampdb -o json