happy-2.1.7/ChangeLog.md

# Revision history for Happy

## 2.1.7

* Add support for `{-# OPTIONS_HAPPY ... #-}` pragmas ([#342](https://github.com/haskell/happy/issues/342)).
* Tested with GHC 8.0 - 9.12.2. The Haskell code generated by Happy is for GHC 8.0 and up.

## 2.1.6

* No longer emit `{-# LANGUAGE PartialTypeSignatures #-}` twice ([#339](https://github.com/haskell/happy/issues/339)).
* No longer emit `import Data.Function` twice ([#341](https://github.com/haskell/happy/issues/341)).
* Tested with GHC 8.0 - 9.12.2. The Haskell code generated by Happy is for GHC 8.0 and up.

## 2.1.5

Add `Data.Tuple` to `Happy_Prelude` in order to fix the `--debug` build (#330).

## 2.1.4

Move `Paths_happy_lib` into `tabular` to prevent a Cabal bug concerning --libsubdir (#328).
It is likely that this release fixes `cabal v1-install happy` as well, which was broken since happy-2.0 (#315).

## 2.1.3

Auto-resolve shift/reduce conflicts involving the catch token.
This was to support ongoing work in GHC to utilise the catch token.

## 2.1.2

Fix a breaking change (#325) introduced by the previous fix for #131.
Prelude is no longer used by Happy.

## 2.1.1

This release fixes two breaking changes:

* Properly qualify all uses of Prelude functions, fixing #131
* Bring back the old `%errorhandlertype` directive, the use of which is discouraged in favour of the "Reporting expected tokens" mechanism in Happy 2.1, accessible via `%error.expected`.

## 2.1

* Added `--numeric-version` CLI flag.
* Documented and implemented the new feature "Resumptive parsing with `catch`"
* Documented (and reimplemented) the "Reporting expected tokens" feature (which turned out to cause a breaking change in this release: #320)

## 2.0.2

The 2.0.1 release in turn exposed two more regressions:

* Generated code uses PatternGuards without declaring it (#309)
* Use of `happy-lib:*` syntax to depend on private library components triggered a bug in Cabal versions 3.0 and 3.2 (#311)

This release fixes both.

## 2.0.1

The 2.0 release changed the indentation character from tabs to two spaces, triggering an unforced breaking change in GHC (#303).
This release provides the fix by using eight spaces for indentation.

## 2.0

There are two main breaking changes in this release:

1. Removed non-array, non-GHC modes, so flags `-ag` are the default now and become no-ops.
2. Generated parsers now activate the language extension `-XNoStrictData`, without which every use of a happy parser would lead to an immediate crash (#273). This causes us to drop support for GHC < 8.0.

Furthermore, the project structure was modularized and a library `happy-lib` containing the implementation of the `happy` executable was extracted.
Quite similar to the situation with GHC vs. the GHC API, we expect that `happy` will continue to be a stable CLI tool with solid (if occasionally out of date) documentation, while the design, documentation and implementation of `happy-lib` is still in flux and use is only recommended to expert users.

Other, more minor changes:

* Revert the new bootstrapping system of 1.21.0 to mitigate build issues (#255, #274).
* Encode action table offsets in 32 bit instead of 16 bit (#93, #199, #266). This increases the size of generated parsers a bit (about 250KB for GHC's parser), but also manages to generate parsers for grammars that were previously running into the size limit (#199).
* The documentation has been converted to ReStructuredText, hosted at https://haskell-happy.readthedocs.io/en/latest/ (#226)
* A few internal refactorings to the structure of generated code.

## 1.21.0

The main focus of this release was revamping the build system and bootstrapping.

* The release no longer contains generated source code. Instead of simply requiring a pre-built bootstrap version of Happy in that event, we have a parser-combinator-based implementation of enough of Happy to bootstrap the rest. (Currently, the bootstrap version is everything but attribute grammars, and thus sufficient for e.g. GHC, but this is subject to change.) The bootstrap version of Happy is then sufficient to build Happy once again with all features enabled. Note, this means users of attribute grammars will have to modify the way they build happy if they were previously building from Hackage relying on the pre-generated sources.
* Rather than creating many "templates" at build time, there is a single combined template. Different implementations are chosen using CPP, as was already done within the templates before.
* Some imports were tightened down, which may help building with newer versions of `base`.
## 1.20.1

* Fix for building with mtl-2.3.1 (GHC 9.6)

## 1.20.0

* Fix #121: the -i flag produces an .info file even if the `%expect` pragma is violated
* Fix #131: qualify uses of Prelude functions in generated code
* Fix #161: drop fewer parse items when generating an .info file
* Introduce the `%shift` directive to resolve shift/reduce conflicts explicitly, useful in conjunction with `%expect 0`
* Remove the deprecated build configuration flag `small_base`

## 1.19.12

* Fix for building with GHC 8.8.x
* Move custom Setup preprocessing steps into a separate executable, like Alex

## 1.19.11

* Fix for building with GHC 8.6.x

## 1.19.10

* Fix polymorphic (rank-n) non-terminals
* Fix for GHC 8.8.1

## 1.19.9

* Fix cabal warnings
* Bump upper bounds
* Fix build with GHC 8.4.1-alpha

## 1.19.8

* Fix issue #94 (some grammars don't compile due to new type signatures introduced to allow overloading to be used)

## 1.19.7

* Fix missing test suite files in the sdist

## 1.19.6

* Manually generate Parser.hs using Makefile before sdist, to fix bootstrapping problems with cabal sandboxes & new-build
* Documentation fixes
* Fixed GLR support
* New option `-p`/`--pretty` prints the grammar rules (only) to a file
* Added generation of additional type signatures to enable use of typeclasses in monadic parsers.
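The `%shift` directive introduced in 1.20.0 can be sketched on the classic dangling-else conflict (a hypothetical grammar fragment for illustration, not taken from this distribution; the nonterminal, token, and constructor names are made up):

```
%expect 0

Stmt : if Expr then Stmt           %shift { If $2 $4 Nothing }
     | if Expr then Stmt else Stmt        { If $2 $4 (Just $6) }
```

Annotating the shorter production with `%shift` declares that the shift/reduce conflict on `else` should be resolved by shifting, so the conflict is no longer counted and the grammar still passes under `%expect 0`.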
## 1.19.5

* Fixes for GHC 7.10
* Code cleanups (thanks to Index Int)

## 1.19.4

* Fix for GHC 7.10 (Applicative/Monad, #19, #21)

## 1.19.3

* Fix for GHC 7.2 (#16)

## 1.19.2

* Fixes for clang (XCode 5)

## 1.19.1

* Repackaged to build with GHC 7.7+

## 1.19

* Necessary changes to work with GHC 7.8

## 1.18.10

* Fix build with GHC 7.6

## 1.18.8

* Fix a packaging bug (cabal-install-0.10.2 didn't put the Happy-generated files in the sdist)

## 1.18.7

* Fix a bug in error handling when using `%monad` without `%lexer`

## 1.18.5 --- 17 Jun 2010

## 1.18.4 --- 23 April 2009

## 1.18.2 --- 5 November 2008

## 1.18.1 --- 14 October 2008

## 1.18 --- 13 October 2008

* New feature: EBNF-style parameterized macros, thanks to Iavor Diatchki.
* Works with Cabal 1.2, 1.4 and 1.6
* A few minor bugfixes

## 1.17 --- 22 October 2007

* Cabal 1.2 is required
* Works with upcoming GHC 6.8.1
* Fix the `parE` bug (poor error message for errors in the grammar)
* Some performance improvements to Happy itself

## 1.16 --- 8 January 2007

* Switch to a Cabal build system: you need a recent version of Cabal (1.1.6 or later). If you have GHC 6.4.2, then you need to upgrade Cabal before building Happy. GHC 6.6 is fine.
* New `%error` directive
* New production forms: `{%% .. }` and `{%^ .. }`
* Added Attribute Grammar support, by Robert Dockins

## 1.15 --- 14 January 2005

* New `%expect` directive
* The list of tokens passed to happyError now includes the current token (not with `%lexer`).
* Added support for ambiguous grammars via Generalized LR parsing
* Added `%partial` to indicate a parser that can return a result before EOF is reached.

## 1.14 --- 14 April 2004

* New meta-variable `$>` represents the rightmost token.
* Happy's OPTIONS pragma is merged with an existing one in the grammar file, if any.

## 1.13 --- 19 June 2002

* Support for newer versions of GHC (>= 5.04).
* Addition of an experimental flag: `--strict`.
## 1.11 --- 25 September 2001

* Tokens no longer have a default precedence --- if you want a token to have a precedence, you have to declare it.
* Bugfix to templates for GHC on 64-bit platforms.

## 1.10

* Bugfixes, and minor performance improvements.
* Most of the examples work again.

## 1.9

* A grammar may now contain several entry points, allowing several parsers to share parts of the grammar.
* Some bugfixes.

## 1.8

* Parser table compression, and more efficient table encoding when used with GHC. Large grammars can now be compiled in much less time/space than before using GHC.
* Yacc-style operator precedence, thanks to patches from Hermann Oliveira Rodrigues and Josef Svenningsson.
* A debug option which causes the generated parser to print tracing information at each step during parsing.

## 1.6

* Now written in, and generates, Haskell 98.
* Several bug fixes.
* A new option, `-c`, generates parsers that use GHC's `unsafeCoerce#` primitive to speed up parsing and cut down the binary size. The `-c` option can only be used with the -g (GHC extensions) option.
* Parsers generated with the -g option will compile to smaller binaries now --- some sources of parser-bloat were identified and squished.
* Happy has a new Open Source license, based on the BSD license.
* A sample Haskell parser using Happy is included.

## 1.5

* Many bug fixes to the error recovery support, found by experimenting with the Haskell grammar and layout.
* Happy is about 5 times faster on large examples, due to some changes in the LALR(1) algorithms.

As of version 1.5, Happy is capable of parsing full Haskell. We have a Haskell parser that uses Happy, which will shortly be part of the library collection distributed with GHC.

## 1.2

* Supports Haskell 1.4
* Lots of bugs fixed
* Performance: the parser generator is at least 20% faster, and generated parsers should be faster due to the replacement of a data type with a newtype.
* Simple error recovery: designed to be enough to implement the Haskell layout rule.
* Revamped monad support: the monad can now be threaded through the lexer, enabling passing of state between the parser and the lexer (handy for the Haskell layout rule), and consistent error handling.
* The `%newline` feature is removed; the same effect can be achieved using the new monad support.

## 0.9

* Happy should be much faster than before.
* Generated parsers will be 5-10% smaller.
* Happy now compiles with ghc-0.26.
* Support for monadic parsers via `%monad` (see the documentation).
* New syntax: previously

  ```haskell
  f :: { }
  f : ...
    | ...
    etc.
  ```

  can now be written

  ```haskell
  f :: { }
    : ...
    | ...
    etc.
  ```

  (i.e. omit the extra `f`. It was always ignored anyway :-)

* Miscellaneous bug fixes.

happy-2.1.7/LICENSE

The Happy License
-----------------

Copyright 2001, Simon Marlow and Andy Gill. All rights reserved.

Extensions to implement Tomita's Generalized LR parsing:
Copyright 2004, University of Durham, Paul Callaghan and Ben Medlock. All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

happy-2.1.7/Makefile

CABAL = cabal
HAPPY_VER = `awk '/^version:/ { print $$2 }' happy.cabal`
SDIST_DIR=dist-newstyle/sdist

sdist ::
	@case "`$(CABAL) --numeric-version`" in \
	  2.[2-9].* | [3-9].* ) ;; \
	  * ) echo "Error: needs cabal 2.2.0.0 or later (but got : `$(CABAL) --numeric-version`)" ; exit 1 ;; \
	esac
	@if [ "`git status -s`" != '' ]; then \
	  echo "Error: Tree is not clean"; \
	  exit 1; \
	fi
	$(CABAL) v2-sdist all
	@if [ ! -f "${SDIST_DIR}/happy-$(HAPPY_VER).tar.gz" ]; then \
	  echo "Error: source tarball not found: ${SDIST_DIR}/happy-$(HAPPY_VER).tar.gz"; \
	  exit 1; \
	fi
	git checkout .
	git clean -f

sdist-test :: sdist sdist-test-only
	@rm -rf "${SDIST_DIR}/happy-${HAPPY_VER}/"

sdist-test-only ::
	@if [ ! -f "${SDIST_DIR}/happy-$(HAPPY_VER).tar.gz" ]; then \
	  echo "Error: source tarball not found: ${SDIST_DIR}/happy-$(HAPPY_VER).tar.gz"; \
	  exit 1; \
	fi
	rm -rf "${SDIST_DIR}/happy-$(HAPPY_VER)/"
	tar -xf "${SDIST_DIR}/happy-$(HAPPY_VER).tar.gz" -C ${SDIST_DIR}/
	echo "packages: ." > "${SDIST_DIR}/happy-$(HAPPY_VER)/cabal.project"
	echo "tests: True" >> "${SDIST_DIR}/happy-$(HAPPY_VER)/cabal.project"
	cd "${SDIST_DIR}/happy-$(HAPPY_VER)/" \
	  && cabal v2-build all \
	  && cabal v2-test all -j
	@echo ""
	@echo "Success! ${SDIST_DIR}/happy-$(HAPPY_VER).tar.gz is ready for distribution!"
@echo "" happy-2.1.7/README.md0000644000000000000000000000160007346545000012307 0ustar0000000000000000# Happy [![Build Status](https://github.com/haskell/happy/actions/workflows/haskell-ci.yml/badge.svg)](https://github.com/haskell/happy/actions) Happy is a parser generator for Haskell 98 (and later). Happy Parsing! ## Documentation Documentation is hosted on [Read the Docs](https://haskell-happy.readthedocs.io): - [Online (HTML)](https://haskell-happy.readthedocs.io) - [PDF](https://haskell-happy.readthedocs.io/_/downloads/en/latest/pdf/) - [Downloadable HTML](https://haskell-happy.readthedocs.io/_/downloads/en/latest/htmlzip/) For basic information of the sort typically found in a read-me, see the following sections of the docs: - [Introduction](https://haskell-happy.readthedocs.io/en/latest/introduction.html) - [Obtaining Happy](https://haskell-happy.readthedocs.io/en/latest/obtaining.html) - [Contributing](https://haskell-happy.readthedocs.io/en/latest/contributing.html) happy-2.1.7/Setup.hs0000644000000000000000000000005607346545000012470 0ustar0000000000000000import Distribution.Simple main = defaultMain happy-2.1.7/app/0000755000000000000000000000000007346545000011613 5ustar0000000000000000happy-2.1.7/app/Main.lhs0000644000000000000000000004072107346545000013213 0ustar0000000000000000----------------------------------------------------------------------------- The main driver. 
(c) 1993-2003 Andy Gill, Simon Marlow GLR amendments (c) University of Durham, Ben Medlock 2001 ----------------------------------------------------------------------------- > module Main (main) where Path settings auto-generated by Cabal: > import Paths_happy > import Happy.Grammar > import Happy.Frontend > import Happy.Frontend.AbsSyn > import Happy.Frontend.Mangler > import Happy.Frontend.PrettyGrammar > import Happy.Backend.LALR > import Happy.Backend.LALR.ProduceCode (produceParser) > import Happy.Backend.GLR > import Happy.Backend.GLR.ProduceCode > import Happy.Tabular > import Happy.Tabular.Info (genInfoFile) > import System.Console.GetOpt > import Control.Monad ( liftM, when, unless ) > import System.Environment > import System.Exit (exitWith, ExitCode(..)) > import Data.Char > import Data.List ( union ) > import System.IO > import Data.List( isSuffixOf ) > import Data.Version ( showVersion ) > main :: IO () > main = do Read and parse the CLI arguments. > args <- getArgs > main2 args > main2 :: [String] -> IO () > main2 args = Read and parse the CLI arguments. > case getOpt Permute argInfo (constArgs ++ args) of > (cli,_,[]) | DumpVersion `elem` cli -> > bye copyright > (cli,_,[]) | DumpNumericVersion `elem` cli -> > bye projectVersion > (cli,_,[]) | DumpHelp `elem` cli -> do > prog <- getProgramName > bye (usageInfo (usageHeader prog) argInfo) > (cli,[fl_name],[]) -> > runParserGen cli fl_name > (_,_,errors) -> do > prog <- getProgramName > die (concat errors ++ > usageInfo (usageHeader prog) argInfo) > where > runParserGen cliOpts fl_name = do Open the file. > fl <- readFile fl_name > (name, file) <- case fileNameAndType fl_name of > Nothing -> die ("`" ++ fl_name ++ "' does not end in `.y' or `.ly'\n") > Just (name, Y) -> return (name, fl) > Just (name, LY) -> return (name, deLitify fl) Parse, using bootstrapping parser. 
> (BookendedAbsSyn pragma hd abssyn tl) <- case parseYFileContents file of > Left err -> die (fl_name ++ ':' : err) > Right bas -> return bas Combine options set via OPTIONS_HAPPY with those provided via the CLI. > opts <- > case pragma of > Just happyPragma -> do > let (pragmaName, pragmaArgs) = break isSpace happyPragma > when (pragmaName /= "OPTIONS_HAPPY") $ > die ("Unknown happy pragma '" ++ pragmaName ++ "', expected OPTIONS_HAPPY") > case getOpt Permute argInfo (words pragmaArgs) of > (pragmaOpts,_,[]) -> return (union pragmaOpts cliOpts) > (_,_,errors) -> die ("Invalid options in OPTIONS_HAPPY: " ++ concat errors) > Nothing -> return cliOpts If no -g flag has been passed, show a warning. > unless (OptGhcTarget `elem` opts) $ > hPutStrLn stderr "Warning: With happy 2.0, the --ghc flag has become non-optional. To suppress this warning, pass the --ghc flag." Mangle the syntax into something useful. > (g, mAg, common_options) <- case {-# SCC "Mangler" #-} mangler fl_name abssyn of > Left s -> die (unlines s ++ "\n") > Right gd -> return gd > optPrint opts DumpMangle $ putStr $ show g > let select_reductions | OptGLR `elem` opts = select_all_reductions > | otherwise = select_first_reduction > let tables = genTables select_reductions g > sets = lr0items tables > lainfo = (la_prop tables, la_spont tables) > la = lookaheads tables > goto = gotoTable tables > action = actionTable tables > (conflictArray,(sr,rr)) = conflicts tables Debug output > optPrint opts DumpLR0 $ putStr $ show sets > optPrint opts DumpAction $ putStr $ show action > optPrint opts DumpGoto $ putStr $ show goto > optPrint opts DumpLA $ putStr $ show lainfo > optPrint opts DumpLA $ putStr $ show la Report any unused rules and terminals > let (unused_rules, unused_terminals) = redundancies tables > when (not (null unused_rules)) > (hPutStrLn stderr ("unused rules: " ++ show (length unused_rules))) > when (not (null unused_terminals)) > (hPutStrLn stderr ("unused terminals: " ++ show (length 
unused_terminals))) Print out the info file. > info_filename <- getInfoFileName name opts > let info = genInfoFile > (map fst sets) > g > action > goto > conflictArray > fl_name > unused_rules > unused_terminals > version > case info_filename of > Just s -> do > writeFile s info > hPutStrLn stderr ("Grammar info written to: " ++ s) > Nothing -> return () Pretty print the AbsSyn. > pretty_filename <- getPrettyFileName name opts > case pretty_filename of > Just s -> do > let out = render (ppAbsSyn abssyn) > writeFile s out > hPutStrLn stderr ("Production rules written to: " ++ s) > Nothing -> return () Report any conflicts in the grammar. > case expect common_options of > Just n | n == sr && rr == 0 -> return () > Just _ | rr > 0 -> > die ("The grammar has reduce/reduce conflicts.\n" ++ > "This is not allowed when an expect directive is given\n") > Just _ -> > die ("The grammar has " ++ show sr ++ > " shift/reduce conflicts.\n" ++ > "This is different from the number given in the " ++ > "expect directive\n") > _ -> do > (if sr /= 0 > then hPutStrLn stderr ("shift/reduce conflicts: " ++ show sr) > else return ()) > (if rr /= 0 > then hPutStrLn stderr ("reduce/reduce conflicts: " ++ show rr) > else return ()) Now, let's get on with generating the parser. Firstly, find out what kind of code we should generate, and where it should go: > outfilename <- getOutputFileName fl_name opts > opt_coerce <- getCoerce opts > opt_strict <- getStrict opts > opt_debug <- getDebug opts Add any special options or imports required by the parsing machinery. 
> let > header = Just $ > (case hd of Just s -> s; Nothing -> "") > ++ importsToInject opt_debug > if OptGLR `elem` opts %--------------------------------------- Branch off to GLR parser production > then do > let > glr_decode > | OptGLR_Decode `elem` opts = TreeDecode > | otherwise = LabelDecode > filtering > | OptGLR_Filter `elem` opts = UseFiltering > | otherwise = NoFiltering > ghc_exts = UseGhcExts > (importsToInject opt_debug) Unlike below, don't always pass CPP, because only one of the files needs it. > (langExtsToInject) > template' <- getTemplate glrBackendDataDir opts > let basename = takeWhile (/='.') outfilename > let tbls = (action,goto) > (parseName,_,_,_) <- case starts g of > [s] -> return s > s:_ -> do > putStrLn "GLR-Happy doesn't support multiple start points (yet)" > putStrLn "Defaulting to first start point." > return s > [] -> error "produceGLRParser: []" > base <- readFile (baseTemplate template') > lib <- readFile (libTemplate template') > let (dat, parser) = produceGLRParser > (base, lib) -- templates > basename -- basename of specified output file name > tbls -- action table (:: ActionTable) > -- goto table (:: GotoTable) > parseName > header -- header from grammar spec > tl -- trailer from grammar spec > (opt_debug, (glr_decode,filtering,ghc_exts)) > -- controls decoding code-gen > g -- grammar object > common_options -- grammar object > writeFile (basename ++ "Data.hs") dat > writeFile (basename ++ ".hs") parser %--------------------------------------- Resume normal (ie, non-GLR) processing > else do > template' <- getTemplate lalrBackendDataDir opts > let > template = template' ++ "/HappyTemplate.hs" Read in the template file for this target: > templ <- readFile template and generate the code. 
> magic_name <- getMagicName opts > let > outfile = produceParser > g > mAg > common_options > action > goto CPP is needed in all cases with unified template > ("CPP" : langExtsToInject) > header > tl > opt_coerce > opt_strict > defines' = defines opt_debug opt_coerce > (if outfilename == "-" then putStr else writeFile outfilename) > (magicFilter magic_name (outfile ++ defines' ++ templ)) Successfully Finished. ----------------------------------------------------------------------------- > getProgramName :: IO String > getProgramName = liftM (`withoutSuffix` ".bin") getProgName > where str' `withoutSuffix` suff > | suff `isSuffixOf` str' = take (length str' - length suff) str' > | otherwise = str' > bye :: String -> IO a > bye s = putStr s >> exitWith ExitSuccess > die :: String -> IO a > die s = hPutStr stderr s >> exitWith (ExitFailure 1) > dieHappy :: String -> IO a > dieHappy s = getProgramName >>= \prog -> die (prog ++ ": " ++ s) > optPrint :: [CLIFlags] -> CLIFlags -> IO () -> IO () > optPrint cli pass io = > when (elem pass cli) (putStr "\n---------------------\n" >> io) > constArgs :: [String] > constArgs = [] ------------------------------------------------------------------------------ The command line arguments. 
> data CLIFlags = > DumpMangle > | DumpLR0 > | DumpAction > | DumpGoto > | DumpLA > | DumpVersion > | DumpNumericVersion > | DumpHelp > | OptInfoFile (Maybe String) > | OptPrettyFile (Maybe String) > | OptTemplate String > | OptMagicName String > | OptGhcTarget > | OptArrayTarget > | OptUseCoercions > | OptDebugParser > | OptStrict > | OptOutputFile String > | OptGLR > | OptGLR_Decode > | OptGLR_Filter > deriving Eq > argInfo :: [OptDescr CLIFlags] > argInfo = [ > Option ['o'] ["outfile"] (ReqArg OptOutputFile "FILE") > "write the output to FILE (default: file.hs)", > Option ['i'] ["info"] (OptArg OptInfoFile "FILE") > "put detailed grammar info in FILE", > Option ['p'] ["pretty"] (OptArg OptPrettyFile "FILE") > "pretty print the production rules to FILE", > Option ['t'] ["template"] (ReqArg OptTemplate "DIR") > "look in DIR for template files", > Option ['m'] ["magic-name"] (ReqArg OptMagicName "NAME") > "use NAME as the symbol prefix instead of \"happy\"", > Option ['s'] ["strict"] (NoArg OptStrict) > "evaluate semantic values strictly (experimental)", > Option ['g'] ["ghc"] (NoArg OptGhcTarget) > "use GHC extensions", > Option ['c'] ["coerce"] (NoArg OptUseCoercions) > "use type coercions (only available with -g)", > Option ['a'] ["array"] (NoArg OptArrayTarget) > "generate an array-based parser", > Option ['d'] ["debug"] (NoArg OptDebugParser) > "produce a debugging parser (only with -a)", > Option ['l'] ["glr"] (NoArg OptGLR) > "Generate a GLR parser for ambiguous grammars", > Option ['k'] ["decode"] (NoArg OptGLR_Decode) > "Generate simple decoding code for GLR result", > Option ['f'] ["filter"] (NoArg OptGLR_Filter) > "Filter the GLR parse forest with respect to semantic usage", > Option ['?'] ["help"] (NoArg DumpHelp) > "display this help and exit", > Option ['V','v'] ["version"] (NoArg DumpVersion) -- ToDo: -v is deprecated > "output version information and exit", > Option [] ["numeric-version"] (NoArg DumpNumericVersion) -- ToDo: -v is deprecated > 
"output the version number and exit", Various debugging/dumping options... > Option [] ["ddump-mangle"] (NoArg DumpMangle) > "Dump mangled input", > Option [] ["ddump-lr0"] (NoArg DumpLR0) > "Dump LR0 item sets", > Option [] ["ddump-action"] (NoArg DumpAction) > "Dump action table", > Option [] ["ddump-goto"] (NoArg DumpGoto) > "Dump goto table", > Option [] ["ddump-lookaheads"] (NoArg DumpLA) > "Dump lookahead info" > ] ------------------------------------------------------------------------------ Extract various command-line options. > getOutputFileName :: String -> [CLIFlags] -> IO String > getOutputFileName ip_file cli > = case [ s | (OptOutputFile s) <- cli ] of > [] -> return (base ++ ".hs") > where (base, _ext) = break (== '.') ip_file > f:fs -> return (last (f:fs)) > getInfoFileName :: String -> [CLIFlags] -> IO (Maybe String) > getInfoFileName base cli > = case [ s | (OptInfoFile s) <- cli ] of > [] -> return Nothing > [f] -> case f of > Nothing -> return (Just (base ++ ".info")) > Just j -> return (Just j) > _many -> dieHappy "multiple --info/-i options\n" > getPrettyFileName :: String -> [CLIFlags] -> IO (Maybe String) > getPrettyFileName base cli > = case [ s | (OptPrettyFile s) <- cli ] of > [] -> return Nothing > [f] -> case f of > Nothing -> return (Just (base ++ ".grammar")) > Just j -> return (Just j) > _many -> dieHappy "multiple --pretty/-p options\n" > getTemplate :: IO String -> [CLIFlags] -> IO String > getTemplate def cli > = case [ s | (OptTemplate s) <- cli ] of > [] -> def > f:fs -> return (last (f:fs)) > getMagicName :: [CLIFlags] -> IO (Maybe String) > getMagicName cli > = case [ s | (OptMagicName s) <- cli ] of > [] -> return Nothing > f:fs -> return (Just (map toLower (last (f:fs)))) > getCoerce :: [CLIFlags] -> IO Bool > getCoerce cli = return (OptUseCoercions `elem` cli) > getStrict :: [CLIFlags] -> IO Bool > getStrict cli = return (OptStrict `elem` cli) > getDebug :: [CLIFlags] -> IO Bool > getDebug cli = return (OptDebugParser 
`elem` cli) ------------------------------------------------------------------------------ > projectVersion :: String > projectVersion = showVersion version > copyright :: String > copyright = unlines [ > "Happy Version " ++ showVersion version ++ " Copyright (c) 1993-1996 Andy Gill, Simon Marlow (c) 1997-2005 Simon Marlow","", > "Happy is a Yacc for Haskell, and comes with ABSOLUTELY NO WARRANTY.", > "This program is free software; you can redistribute it and/or modify", > "it under the terms given in the file 'LICENSE' distributed with", > "the Happy sources."] > usageHeader :: String -> String > usageHeader prog = "Usage: " ++ prog ++ " [OPTION...] file\n" ----------------------------------------------------------------------------- happy-2.1.7/examples/0000755000000000000000000000000007346545000012651 5ustar0000000000000000happy-2.1.7/examples/Calc.ly0000644000000000000000000000666607346545000014077 0ustar0000000000000000> { > module Calc where > import Char > } First thing to declare is the name of your parser, and the type of the tokens the parser reads. > %name calc > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The left hand side are the names of the terminals or tokens, and the right hand side is how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules. 
> Exp :: { Exp }
> Exp : let var '=' Exp in Exp { Let $2 $4 $6 }
>     | Exp1                   { Exp1 $1 }
>
> Exp1 : Exp1 '+' Term { Plus $1 $3 }
>      | Exp1 '-' Term { Minus $1 $3 }
>      | Term          { Term $1 }
>
> Term : Term '*' Factor { Times $1 $3 }
>      | Term '/' Factor { Div $1 $3 }
>      | Factor          { Factor $1 }
>
> Factor : int         { Int $1 }
>        | var         { Var $1 }
>        | '(' Exp ')' { Brack $2 }

We are simply returning the parsed data structure! Now we need some extra code to support this parser, and make it complete:

> {

All parsers must declare this function, which is called when an error is detected. Note that currently we do no error recovery.

> happyError :: [Token] -> a
> happyError _ = error ("Parse error\n")

Now we declare the data structure that we are parsing.

> data Exp = Let String Exp Exp | Exp1 Exp1
> data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term
> data Term = Times Term Factor | Div Term Factor | Factor Factor
> data Factor = Int Int | Var String | Brack Exp

The data structure for the tokens...

> data Token
>   = TokenLet
>   | TokenIn
>   | TokenInt Int
>   | TokenVar String
>   | TokenEq
>   | TokenPlus
>   | TokenMinus
>   | TokenTimes
>   | TokenDiv
>   | TokenOB
>   | TokenCB

.. and a simple lexer that returns this data structure.

> lexer :: String -> [Token]
> lexer [] = []
> lexer (c:cs)
>   | isSpace c = lexer cs
>   | isAlpha c = lexVar (c:cs)
>   | isDigit c = lexNum (c:cs)
> lexer ('=':cs) = TokenEq : lexer cs
> lexer ('+':cs) = TokenPlus : lexer cs
> lexer ('-':cs) = TokenMinus : lexer cs
> lexer ('*':cs) = TokenTimes : lexer cs
> lexer ('/':cs) = TokenDiv : lexer cs
> lexer ('(':cs) = TokenOB : lexer cs
> lexer (')':cs) = TokenCB : lexer cs

> lexNum cs = TokenInt (read num) : lexer rest
>   where (num,rest) = span isDigit cs

> lexVar cs =
>   case span isAlpha cs of
>     ("let",rest) -> TokenLet : lexer rest
>     ("in",rest)  -> TokenIn : lexer rest
>     (var,rest)   -> TokenVar var : lexer rest

To run the program, call this in gofer, or use some code to print it.

runCalc :: String -> Exp
runCalc = calc .
lexer Here we test our parser. main = case runCalc "1 + 2 + 3" of { (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> case runCalc "1 * 2 + 3" of { (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> case runCalc "1 + 2 * 3" of { (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> case runCalc "let x = 2 in x * (x - 2)" of { (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "AndysTest works\n" ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } quit = print "runCalc failed\n" > } happy-2.1.7/examples/DavesExample.ly0000644000000000000000000000410407346545000015574 0ustar0000000000000000Parses simple lambda expressions to combinators > { > module Parser where > import Lexer > import Convert > import PreludeGlaArray > } > %name parse > %tokentype { Token } > %token idT { Ident $$ } > numT { Number $$ } > boolT { Boolean $$ } > "(" { LeftBracket } > ")" { RightBracket } > "[" { LeftSquare } > "]" { RightSquare } > "[]" { EmptyList } > ";" { SemiColon } > ":" { Colon } > "+" { Infix "+" } > "-" { Infix "-" } > "/" { Infix "/" } > "*" { Infix "*" } > "==" { Infix "==" } > "/=" { Infix "/=" } > ">" { Infix ">" } > "<" { Infix "<" } > ">=" { Infix ">=" } > "<=" { Infix "<=" } > "=" { Builtin "=" } > "else" { Builtin "else" } > "if" { Builtin "if" } > "in" { Builtin "in" } > "let" { Builtin "let" } > "then" { Builtin "then" } > "end" { Builtin "end" } > %% > P : "let" Dec "in" B { mkLet $2 $4} > | "if" B "then" B "else" B { mkIf $2 $4 $6} > | B { $1 } > B :: { Seq (Ptr Exp) } > B : E "==" E { mkOp $1 Equ $3 } > | E "/=" E { mkOp $1 NEq $3 } > | E ">" E { mkOp $1 GT $3 } > | E "<" E { mkOp $1 LT $3 } > | E ">=" E { mkOp $1 GTE $3 } > | E "<=" E { mkOp $1 LTE $3 } > | E { $1 } > E :: { Seq (Ptr Exp) } > E : E "+" T { mkOp $1 Add $3} > | E "-" T { mkOp $1 Sub $3} > | T { $1 } > T :: { Seq 
(Ptr Exp) } > T : T "*" F { mkOp $1 Mul $3 } > | T "/" F { mkOp $1 Quo $3 } > | F { $1 } > F :: { Seq (Ptr Exp) } > F : "(" B ")" { $2 } > | numT { mkNum $1 } > | boolT { mkBool $1 } > | idT { newPtr (mkVar $1) } > | Apps { mkApps $1 } > Apps :: { Seq [Ptr Exp] } > Apps : F Apps { mkApp $1 $2 } > | F { mkAtom $1 } > Dec :: { (Token,Seq (Ptr Exp)) } > Dec : idT Args "=" B { ($1, mkFun $1 $2 $4) } > Args :: { [Exp] } > Args : idT Args { mkVar $1 : $2} > | { [] } > { > happyError :: Text a => a -> b > happyError x = error ("Parse error, line " ++ show x ++ "\n") > } happy-2.1.7/examples/ErlParser.ly0000644000000000000000000002314507346545000015123 0ustar0000000000000000----------------------------------------------------------------------------- $Id: ErlParser.ly,v 1.2 1997/09/24 10:11:23 simonm Exp $ Syntactic analyser for Erlang Copyright : (c) 1996 Ellemtel Telecommunications Systems Laboratories, Sweden Author : Simon Marlow ----------------------------------------------------------------------------- > { > module Parser (parse) where > import GenUtils > import Lexer > import AbsSyn > import Types > import ParseMonad.Class > } > %token > atom { T_Atom $$ } > var { T_Var $$ } > int { T_Int $$ } > float { T_Float $$ } > string { T_String $$ } > 'bor' { T_Bor } > 'bxor' { T_Bxor } > 'bsl' { T_Bsl } > 'bsr' { T_Bsr } > 'div' { T_Div } > 'rem' { T_Rem } > 'band' { T_Band } > 'bnot' { T_Bnot } > 'begin' { T_Begin } > 'end' { T_End } > 'catch' { T_Catch } > 'case' { T_Case } > 'of' { T_Of } > 'if' { T_If } > 'receive' { T_Receive } > 'after' { T_After } > 'when' { T_When } > 'fun' { T_Fun } > 'true' { T_True } > 'deftype' { T_DefType } > 'type' { T_Type } > '+' { T_Plus } > '-' { T_Minus } > '*' { T_Mult } > '/' { T_Divide } > '==' { T_Eq } > '/=' { T_Neq } > '=<' { T_Leq } > '<' { T_Lt } > '>=' { T_Geq } > '>' { T_Gt } > '=:=' { T_ExactEq } > '=/=' { T_ExactNeq } > '!' 
{ T_Pling } > '=' { T_Equals } > '[' { T_LSquare } > ']' { T_RSquare } > '(' { T_LParen } > ')' { T_RParen } > '{' { T_LCurly } > '}' { T_RCurly } > ',' { T_Comma } > ';' { T_SemiColon } > '|' { T_Bar } > ':' { T_Colon } > '->' { T_Arrow } > '.' { T_Dot } > '\\' { T_BackSlash } > header_prog { T_Prog } > header_iface { T_Interface } > %monad { P } { thenP } { returnP } > %lexer { lexer } { T_EOF } > %name parse > %tokentype { Token } > %% > parse :: { ProgOrInterface } > : header_prog program { It's_a_prog $2 } > | header_iface interface { It's_an_iface $2 } > program :: { [Form] } > : { [] } > | form program { $1 : $2 } > add_op :: { BinOp } > : '+' { O_Add } > | '-' { O_Subtract } > | 'bor' { O_Bor } > | 'bxor' { O_Bxor } > | 'bsl' { O_Bsl } > | 'bsr' { O_Bsr } > comp_op :: { CompOp } > : '==' { O_Eq } > | '/=' { O_Neq } > | '=<' { O_Leq } > | '<' { O_Lt } > | '>=' { O_Geq } > | '>' { O_Gt } > | '=:=' { O_ExactEq } > | '=/=' { O_ExactNeq } > mult_op :: { BinOp } > : '*' { O_Multiply } > | '/' { O_Divide } > | 'div' { O_Div } > | 'rem' { O_Rem } > | 'band' { O_Band } > prefix_op :: { UnOp } > : '+' { O_Plus } > | '-' { O_Negate } > | 'bnot' { O_Bnot } > basic_type :: { Expr } > : atm { E_Atom $1 } > | int { E_Int $1 } > | float { E_Float $1 } > | string { foldr E_Cons E_Nil (map (E_Int . 
ord) $1) } > | var { E_Var $1 } > pattern :: { Expr } > : basic_type { $1 } > | '[' ']' { E_Nil } > | '[' pattern pat_tail ']' { E_Cons $2 $3 } > | '{' '}' { E_Tuple [] } > | '{' patterns '}' { E_Tuple $2 } > | atm '{' patterns '}' { E_Struct $1 $3 } > pat_tail :: { Expr } > : '|' pattern { $2 } > | ',' pattern pat_tail { E_Cons $2 $3 } > | { E_Nil } > patterns :: { [ Expr ] } > : pattern { [ $1 ] } > | pattern ',' patterns { $1 : $3 } > expr :: { Expr } > : 'catch' expr { E_Catch $2 } > | 'fun' '(' formal_param_list ')' '->' expr 'end' { E_Fun $3 $6 } > | 'fun' var '/' int { E_FunName (LocFun $2 $4) } > | 'fun' var ':' var '/' int { E_FunName (ExtFun $2 $4 $6) } > | expr200 { $1 } > expr200 :: { Expr } > : expr300 '=' expr { E_Match $1 $3 } > | expr300 '!' expr { E_Send $1 $3 } > | expr300 { $1 } > expr300 :: { Expr } > : expr300 add_op expr400 { E_BinOp $2 $1 $3 } > | expr400 { $1 } > expr400 :: { Expr } > : expr400 mult_op expr500 { E_BinOp $2 $1 $3 } > | expr500 { $1 } > expr500 :: { Expr } > : prefix_op expr0 { E_UnOp $1 $2 } > | expr0 { $1 } > expr0 :: { Expr } > : basic_type { $1 } > | '[' ']' { E_Nil } > | '[' expr expr_tail ']' { E_Cons $2 $3 } > | '{' maybeexprs '}' { E_Tuple $2 } > | atm '{' maybeexprs '}' { E_Struct $1 $3 } > | atm '(' maybeexprs ')' { E_Call (LocFun $1 (length $3)) $3 } > | atm ':' atm '(' maybeexprs ')' > { E_Call (ExtFun $1 $3 (length $5)) $5 } > | '(' expr ')' { $2 } > | 'begin' exprs 'end' { E_Block $2 } > | 'case' expr 'of' cr_clauses 'end' { E_Case $2 $4 } > | 'if' if_clauses 'end' { E_If $2 } > | 'receive' 'after' expr '->' exprs 'end' > { E_Receive [] (Just ($3,$5)) } > | 'receive' cr_clauses 'end' { E_Receive $2 Nothing } > | 'receive' cr_clauses 'after' expr '->' exprs 'end' > { E_Receive $2 (Just ($4,$6)) } > expr_tail :: { Expr } > : '|' expr { $2 } > | ',' expr expr_tail { E_Cons $2 $3 } > | { E_Nil } > cr_clause :: { CaseClause } > : expr clause_guard '->' exprs { ($1,$2,$4) } > clause_guard :: { [ GuardTest ] } > : 
'when' guard { $2 } > | { [] } > cr_clauses :: { [ CaseClause ] } > : cr_clause { [ $1 ] } > | cr_clause ';' cr_clauses { $1 : $3 } > if_clause :: { IfClause } > : guard '->' exprs { ($1,$3) } > if_clauses :: { [ IfClause ] } > : if_clause { [ $1 ] } > | if_clause ';' if_clauses { $1 : $3 } > maybeexprs :: { [ Expr ] } > : exprs { $1 } > | { [] } > exprs :: { [ Expr ] } > : expr { [ $1 ] } > | expr ',' exprs { $1 : $3 } > guard_test :: { GuardTest } > : atm '(' maybeexprs ')' { G_Bif $1 $3 } > | expr300 comp_op expr300 { G_Cmp $2 $1 $3 } > guard_tests :: { [ GuardTest ] } > : guard_test { [ $1 ] } > | guard_test ',' guard_tests { $1 : $3 } > guard :: { [ GuardTest ] } > : 'true' { [] } > | guard_tests { $1 } > function_clause :: { FunctionClause } > : atm '(' formal_param_list ')' clause_guard '->' exprs > { (LocFun $1 (length $3),$3,$5,$7) } > formal_param_list :: { [ Expr ] } > : { [] } > | patterns { $1 } > function :: { Function } > : function_clause { [ $1 ] } > | function_clause ';' function { $1 : $3 } > attribute :: { Attribute } > : pattern { A_Pat $1 } > | '[' farity_list ']' { A_Funs $2 } > | atm ',' '[' maybe_farity_list ']' { A_AtomAndFuns $1 $4 } > maybe_farity_list :: { [ Fun ] } > : farity_list { $1 } > | { [] } > farity_list :: { [ Fun ] } > : farity { [ $1 ] } > | farity ',' farity_list { $1 : $3 } > farity :: { Fun } > : atm '/' int { LocFun $1 $3 } > form :: { Form } > : '-' atm '(' attribute ')' '.' { F_Directive $2 $4 } > | '-' 'type' sigdef '.' { $3 } > | '-' 'deftype' deftype '.' { $3 } > | function '.' 
{ F_Function $1 } > abstype :: { Form } > : atm '(' maybetyvars ')' maybeconstraints > { F_AbsTypeDef (Tycon $1 (length $3)) $3 (snd $5) } > deftype :: { Form } > : atm '(' maybetyvars ')' '=' utype maybeconstraints > { F_TypeDef (Tycon $1 (length $3)) $3 $6 (fst $7) (snd $7) } > sigdef :: { Form } > : atm '(' maybeutypes ')' '->' utype maybeconstraints > { F_TypeSig ($1,length $3) $3 $6 (fst $7) (snd $7) } > header :: { (String,Int,[UType]) } > : atm '(' maybeutypes ')' { ($1, length $3, $3) } > tycon_args :: { [ TyVar ] } > : tycon_args ',' var { STyVar $3 : $1 } > | var { [ STyVar $1 ] } ----------------------------------------------------------------------------- Interfaces & Types > interface :: { (Module, [ Form ]) } > : '-' atm '(' atm ')' '.' signatures > { ($4, $7) } > signatures :: { [ Form ] } > : signatures typedef '.' { $2 : $1 } > | { [] } > typedef :: { Form } > typedef > : '-' 'deftype' deftype { $3 } > | '-' 'deftype' abstype { $3 } > | sigdef { $1 } > maybeconstraints :: { ([Constraint], [VarConstraint]) } > : 'when' constraints { splitConstraints $2 } > | { ([],[]) } > constraints :: { [ VarOrTypeCon ] } > : constraints ';' constraint { $1 ++ $3 } > | constraint { $1 } > constraint :: { [ VarOrTypeCon ] } > : utype '<' '=' utype { [TypeCon ($1,$4)] } > | utype '=' utype { [TypeCon ($1,$3),TypeCon($3,$1)] } > | var '\\' tags { [VarCon (STyVar $1,(canonTags $3))] } > maybeutypes :: { [ UType ] } > : utypes { reverse $1 } > | { [] } > utypes :: { [ UType ] } > : utypes ',' utype { $3 : $1 } > | utype { [$1] } > maybetyvars :: { [ TyVar ] } > : tyvars { reverse $1 } > | { [] } > tyvars :: { [ TyVar ] } > : tyvars ',' var { STyVar $3 : $1 } > | var { [ STyVar $1 ] } > utype :: { UType } > : ptypes { U (reverse $1) [] } > | ptypes '|' tyvar { U (reverse $1) [$3] } > | tyvar { U [] [$1] } > | '(' utype ')' { $2 } > | '(' ')' { U [] [] } > tyvar :: { TaggedTyVar } > : var { TyVar [] (STyVar $1) } > | int { if $1 /= 1 then > error "Illegal type variable" 
> else universalTyVar } > | int '\\' tags { if $1 /= 1 then > error "Illegal type variable" > else partialUniversalTyVar $3 } > ptypes :: { [ PType ] } > : ptypes '|' ptype { $3 : $1 } > | ptype { [$1] } > ptype :: { PType } > : atm '(' ')' { conToType $1 [] } > | atm '(' utypes ')' { conToType $1 (reverse $3) } > | atm { TyAtom $1 } > | '{' utypes '}' { tytuple (reverse $2) } > | atm '{' maybeutypes '}' { TyStruct $1 $3 } > | '[' utype ']' { tylist $2 } > taglist :: { [ Tag ] } > : taglist ',' tag { $3 : $1 } > | tag { [ $1 ] } > tags :: { [ Tag ] } > : tag { [ $1 ] } > | '(' taglist ')' { $2 } > tag :: { Tag } > : atm '(' ')' { conToTag $1 } > | atm { TagAtom $1 } > | atm '/' int { TagStruct $1 $3 } > | '{' int '}' { tagtuple $2 } > | '[' ']' { taglist } Horrible - keywords that can be atoms too. > atm :: { String } > : atom { $1 } > | 'true' { "true" } > | 'deftype' { "deftype" } > | 'type' { "type" } > { > utypeToVar (U [] [TyVar [] x]) = x > utypeToVar _ = error "Type constructor arguments must be variables\n" > happyError :: P a > happyError s line = failP (show line ++ ": Parse error\n") s line > } happy-2.1.7/examples/ErrorTest.ly0000644000000000000000000001044707346545000015156 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. 
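The next two examples (ErrorTest.ly and LexerTest.ly) both thread the same success/failure parse monad through every reduction. As a preface, here is a standalone sketch of that monad; the names `ParseResult`, `P`, `thenP` and `returnP` match the files below, while the `Show` instance, the `peekP` primitive and the demo `main` are ours for illustration only:

```haskell
-- The parse monad from ErrorTest/LexerTest, sketched standalone: a parser
-- is a function from the remaining input and the current line number to
-- either a result or a failure message.
data ParseResult a = ParseOk a | ParseFail String deriving Show

type P a = String -> Int -> ParseResult a

thenP :: P a -> (a -> P b) -> P b
m `thenP` k = \s l ->
  case m s l of
    ParseFail e -> ParseFail e      -- failure short-circuits
    ParseOk a   -> k a s l          -- success passes the state on

returnP :: a -> P a
returnP a = \_ _ -> ParseOk a

-- Illustrative primitive (not in the examples): peek at the next character.
peekP :: P Char
peekP []    _ = ParseFail "unexpected end of input"
peekP (c:_) _ = ParseOk c

main :: IO ()
main = do
  print ((peekP `thenP` \c -> returnP [c,c]) "abc" 1)  -- ParseOk "aa"
  print (peekP "" 1)
```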
> { > import Data.Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> ParseOk (Let l $2 $4 $6) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > | error { Term (Factor (Int 1)) } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P a = String -> Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \s l -> > case m s l of > ParseFail s -> ParseFail s > ParseOk a -> k a s l > returnP :: a -> P a > returnP a = \s l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. 
> lexer :: (Token -> P a) -> P a > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: String -> Exp > runCalc s = case calc s 1 of > ParseOk e -> e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. This is the overall type of the parser. > type Parse = P Exp > calc :: Parse The next function is called when a parse error is detected. It has the same type as the top-level parse function. > happyError :: P a > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. 
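Note the fourth test below: the input "+ 2 * 3" is deliberately ill-formed. Parsing succeeds anyway because of the `error` alternative in the `Exp1` rule above: when the parser sees '+' where an expression was expected, it shifts Happy's special `error` token and reduces it with the supplied dummy value, so the input behaves like "1 + 2 * 3". The alternative in question, repeated from the grammar:

```
Exp1 : Exp1 '+' Term   { Plus $1 $3 }
     | Exp1 '-' Term   { Minus $1 $3 }
     | Term            { Term $1 }
     | error           { Term (Factor (Int 1)) }
```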
> main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "+ 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-2.1.7/examples/LexerTest.ly0000644000000000000000000001015107346545000015134 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. 
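LexerTest.ly is the same calculator, but driven through a `%lexer` directive: instead of tokenising the whole input up front, the parser calls the lexer in continuation-passing style, one token at a time, inside the same monad. A standalone sketch of that shape, with a toy token type and driver of our own (the real lexer appears in the file below):

```haskell
import Data.Char (isDigit, isSpace)

-- Continuation-passing lexer shape used with %lexer + %monad: the monad
-- threads the remaining input and the line number, and the lexer hands
-- exactly one token to the parser's continuation.
type P a = String -> Int -> Either String a

data Token = TNum Int | TEOF deriving Show

lexer :: (Token -> P a) -> P a
lexer cont []          = cont TEOF []
lexer cont ('\n':rest) = \line -> lexer cont rest (line + 1)
lexer cont s@(c:rest)
  | isSpace c = lexer cont rest
  | isDigit c = let (num, rest') = span isDigit s
                in cont (TNum (read num)) rest'
  | otherwise = \line -> Left ("lexical error at line " ++ show line)

-- A trivial "parser" that just fetches the next token:
nextToken :: P Token
nextToken = lexer (\tok _ _ -> Right tok)

main :: IO ()
main = do
  print (nextToken " \n 42 7" 1)  -- Right (TNum 42)
  print (nextToken "x" 1)         -- Left "lexical error at line 1"
```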
> { > import Data.Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> ParseOk (Let l $2 $4 $6) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P a = String -> Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \s l -> > case m s l of > ParseFail s -> ParseFail s > ParseOk a -> k a s l > returnP :: a -> P a > returnP a = \s l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. 
> lexer :: (Token -> P a) -> P a > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: String -> Exp > runCalc s = case calc s 1 of > ParseOk e -> e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. This is the overall type of the parser. > calc :: P Exp The next function is called when a parse error is detected. It has the same type as the top-level parse function. > happyError :: P a > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. 
> main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-2.1.7/examples/MonadTest.ly0000644000000000000000000000707207346545000015123 0ustar0000000000000000----------------------------------------------------------------------------- Tests %monad without %lexer. > { > import Data.Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves two purposes: * it passes the current line number around * it deals with success/failure. 
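MonadTest.ly drops `%lexer`, so the monad no longer has to carry the input string; compare LexerTest's `String -> Int -> ParseResult a` with the bare `Int -> ParseResult a` below. The `%monad { P } { thenP } { returnP }` declaration makes the generated parser chain every monadic action with `thenP`; a sketch of that sequencing (our code, not Happy's output, and using `Either` in place of the file's `ParseResult` for brevity):

```haskell
-- Without %lexer the parse monad only threads the line number.
type P a = Int -> Either String a

thenP :: P a -> (a -> P b) -> P b
m `thenP` k = \l ->
  case m l of
    Left e  -> Left e
    Right a -> k a l

returnP :: a -> P a
returnP a = \_ -> Right a

-- Two monadic actions sequenced the way the generated parser chains
-- {% ... } semantic actions:
demo :: P (Int, Int)
demo = returnP 1 `thenP` \x ->
       returnP 2 `thenP` \y ->
       returnP (x, y)

main :: IO ()
main = print (demo 99)  -- Right (1,2)
```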
> data ParseResult a > = ParseOk a > | ParseFail String > type P a = Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \l -> > case m l of > ParseFail s -> ParseFail s > ParseOk a -> k a l > returnP :: a -> P a > returnP a = \l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest > runCalc :: String -> Exp > runCalc s = case calc (lexer s) 1 of > ParseOk e -> e > ParseFail s -> error s > happyError = \tks i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. 
> main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-2.1.7/examples/PgnParser.ly0000644000000000000000000000367007346545000015126 0ustar0000000000000000> { > module PgnParser (pgnMoveParser,pgnParser) where > import GenUtils > import OrdFM > import Board > import PgnTypes > } > %name pgnMoveParser > %tokentype { Token } > %token > str { StringToken $$ } > result { ResultToken $$ } > nag { NAGToken $$ } > tag { TagToken $$ } > comment { CommentToken $$ } > ']' { RightSBToken } > '(' { LeftRBToken } > ')' { RightRBToken } > '<' { LeftABToken } > '>' { RightABToken } > num { IntToken $$ } > '.' { PeriodToken } > move { PlyToken $$ } > %newline { NewlineToken } > %% You either parse a set of PGN games, or just a set of moves. > moves :: { AbsMove } > moves : opt_mv_num line_no move nags opt_comment analmoves opt_comment > more_moves > { AbsMove $1 $2 $3 $4 ($5++$7) $6 Nothing $8 } > | opt_mv_num line_no move nags opt_comment more_moves > { AbsMove $1 $2 $3 $4 $5 [] Nothing $6 } > | opt_mv_num line_no move '<' raw_moves '>' more_moves > { AbsMove $1 $2 $3 [] [] [] (Just $5) $7 } > more_moves :: { AbsMove } > more_moves > : moves { $1 } > | result { AbsResult $1 } > | { AbsEnd } > nags :: { [Int] } > nags : nag nags { $1 : $2 } > | { [] } > opt_mv_num :: { Maybe MoveNumber } > opt_mv_num > : num '.' '.' '.' { Just (MoveNumber $1 Black) } > | num '.' 
{ Just (MoveNumber $1 White) } > | { Nothing } > mv_num :: { MoveNumber } > mv_num > : num '.' '.' '.' { (MoveNumber $1 Black) } > | num '.' { (MoveNumber $1 White) } > opt_comment :: { [String] } > opt_comment > : comment { $1 } > | { [] } > analmoves :: { [AbsMove] } > analmoves > : '(' moves ')' analmoves { $2 : $4 } > | '(' moves ')' { [$2] } > line_no :: { LineNo } > line_no > : { $# } > raw_moves :: { [AbsPly] } > raw_moves > : move raw_moves { $1 : $2 } > | { [] } > { > pgnParser = pgnGameMap pgnMoveParser > happyError :: Int -> [Token] -> a > happyError i xs = > error ("Parse error in line " ++ show i ++ "\n" > ++ show (take 10 xs)) > } happy-2.1.7/examples/README0000644000000000000000000000160507346545000013533 0ustar0000000000000000These are a few examples of Happy parsers, taken from various sources. They are intended as illustrations, not as working, complete examples, as some require functions and datatypes imported from other sources. Calc.ly : The calculator example from the Happy manual DavesExample.ly : Parses simple lambda expressions to combinators SimonsExample.ly : Another lambda expression parser ErlParser.ly : A parser for Erlang MonadTest.ly : Demonstrates use of %monad LexerTest.ly : Demonstrates use of %monad and %lexer ErrorTest.ly : Demonstrates use of the 'error' token There are also a few more examples under happy/tests. A full Haskell 98 parser written using Happy is available from the GHC CVS repository in fptools/hslibs/hssource. See http://www.haskell.org/ghc/ for instructions on how to access the GHC CVS repository. 
-- Simon Marlow happy-2.1.7/examples/SimonsExample.ly0000644000000000000000000000320307346545000016001 0ustar0000000000000000> { > module Parser (parse) where > import Type > import Lexer > } > %token > backslash { Builtin "\\" } > var { Ident $$ } > rightarrow { Builtin "->" } > caseT { Builtin "case" } > letT { Builtin "let" } > ofT { Builtin "of" } > inT { Builtin "in" } > letnT { Builtin "letn" } > leftcurly { LeftCurly } > rightcurly { RightCurly } > equals { Builtin "=" } > colon { Builtin ":" } > cons { Constructor $$ } > leftbracket { LeftBracket } > rightbracket { RightBracket } > semicolon { SemiColon } > percent { Percent } > %name parse > %tokentype { Token } > %% > expr > : backslash var binders rightarrow expr > { foldr Lambda $5 ($2: reverse $3) } > | caseT var ofT leftcurly patterns rightcurly > { Case $2 (reverse $5) } > | letT var equals var expr inT expr > { LetApp ($2,$4,$5) $7 } > | letT var equals expr inT expr > { Let ($2,$4) $6 } > | letnT var equals expr inT expr > { LetN ($2,$4) $6 } > > | labelref colon expr { Label $1 $3 } > | simpleexpr { $1 } > simpleexpr > : cons simpleexprs { Cons $1 (reverse $2) } > | simpleexpr0 { $1 } > > simpleexprs > : simpleexprs simpleexpr0 { $2 : $1 } > | { [] } > > simpleexpr0 > : var { Var $1 } > | labelref { LabelRef $1 } > | leftbracket expr rightbracket { $2 } > > patterns > : patterns pattern { $2 : $1 } > | pattern { [ $1 ] } > > pattern : cons binders rightarrow expr semicolon > { ($1, reverse $2, $4) } > > binders : binders var { $2 : $1 } > | { [ ] } > > labelref > : percent var { $2 } > { > happyError :: Int -> a > happyError x = error ("Error at LINE " ++ show x) > } happy-2.1.7/examples/glr/0000755000000000000000000000000007346545000013435 5ustar0000000000000000happy-2.1.7/examples/glr/Makefile0000644000000000000000000000035407346545000015077 0ustar0000000000000000 all : make loop CMD=run clean : make loop CMD=clean DIRS = expr-eval expr-tree expr-monad \ hidden-leftrec highly-ambiguous packing \ 
nlp bio-eg loop : for d in ${DIRS}; do (cd $$d && make ${CMD}) || exit 1; done happy-2.1.7/examples/glr/Makefile.defs0000644000000000000000000000034507346545000016017 0ustar0000000000000000.SUFFIXES: .y .hs .exe OPT= GHC=ghc -rtsopts -I../common -i../common -fno-warn-tabs ${OPT} # -dshow-passes HAPPY=happy FILTER = --filter FILTER = DECODE = H_OPT = .y.hs : ${HAPPY} -i -l ${DECODE} ${FILTER} ${H_OPT} $*.y happy-2.1.7/examples/glr/bio-eg/0000755000000000000000000000000007346545000014577 5ustar0000000000000000happy-2.1.7/examples/glr/bio-eg/1-1200.dna0000644000000000000000000000226107346545000016004 0ustar0000000000000000agcttttcattctgactgcaacgggcaatatgtctctgtgtggattaaaaaaagagtgtctgatagcagcttctgaactggttacctgccgtgagtaaattaaaattttattgacttaggtcactaaatactttaaccaatataggcatagcgcacagacagataaaaattacagagtacacaacatccatgaaacgcattagcaccaccattaccaccaccatcaccattaccacaggtaacggtgcgggctgacgcgtacaggaaacacagaaaaaagcccgcacctgacagtgcgggctttttttttcgaccaaaggtaacgaggtaacaaccatgcgagtgttgaagttcggcggtacatcagtggcaaatgcagaacgttttctgcgtgttgccgatattctggaaagcaatgccaggcaggggcaggtggccaccgtcctctctgcccccgccaaaatcaccaaccacctggtggcgatgattgaaaaaaccattagcggccaggatgctttacccaatatcagcgatgccgaacgtatttttgccgaacttttgacgggactcgccgccgcccagccggggttcccgctggcgcaattgaaaactttcgtcgatcaggaatttgcccaaataaaacatgtcctgcatggcattagtttgttggggcagtgcccggatagcatcaacgctgcgctgatttgccgtggcgagaaaatgtcgatcgccattatggccggcgtattagaagcgcgcggtcacaacgttactgttatcgatccggtcgaaaaactgctggcagtggggcattacctcgaatctaccgtcgatattgctgagtccacccgccgtattgcggcaagccgcattccggctgatcacatggtgctgatggcaggtttcaccgccggtaatgaaaaaggcgaactggtggtgcttggacgcaacggttccgactactctgctgcggtgctggctgcctgtttacgcgccgattgttgcgagatttggacggacgttgacggggtctatacctgcgacccgcgtcaggtgcccgatgcgaggttgttgaagtcgatgtcctaccaggaagcgatggagctttcctacttcggcgctaaagttcttcacccccgcaccattacccccatcgcccagttccagatcccttgcctgattaaaaataccggaaatcct happy-2.1.7/examples/glr/bio-eg/1-600.dna0000644000000000000000000000113107346545000015722 
0ustar0000000000000000agcttttcattctgactgcaacgggcaatatgtctctgtgtggattaaaaaaagagtgtctgatagcagcttctgaactggttacctgccgtgagtaaattaaaattttattgacttaggtcactaaatactttaaccaatataggcatagcgcacagacagataaaaattacagagtacacaacatccatgaaacgcattagcaccaccattaccaccaccatcaccattaccacaggtaacggtgcgggctgacgcgtacaggaaacacagaaaaaagcccgcacctgacagtgcgggctttttttttcgaccaaaggtaacgaggtaacaaccatgcgagtgttgaagttcggcggtacatcagtggcaaatgcagaacgttttctgcgtgttgccgatattctggaaagcaatgccaggcaggggcaggtggccaccgtcctctctgcccccgccaaaatcaccaaccacctggtggcgatgattgaaaaaaccattagcggccaggatgctttacccaatatcagcgatgccgaacgtatttttgccgaacttttgacgggactcgccgccgcccagccggggttcccgctggcg happy-2.1.7/examples/glr/bio-eg/Bio.y0000644000000000000000000003201307346545000015501 0ustar0000000000000000{ -- (c) 2004 University of Durham, Julia Fischer -- Portions of the grammar are derived from work by Leung/Mellish/Robertson import Data.Char } %tokentype { Token } %token a { Base_A } c { Base_C } g { Base_G } t { Base_T } %lexer { lexer } { TokenEOF } %% M : Intergenic_noise Match Intergenic_noise {} -- replace NSkip by Intergenic_noise? Intergenic_noise : {} | Intergenic_noise N {} -- Left-assoc, less stack? 
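The comment above ("Left-assoc, less stack?") is on the right track: in an LR parser such as Happy generates, a left-recursive list reduces after each element and so runs in constant stack, while the right-recursive spelling must shift the entire input before its first reduction. The two spellings side by side (illustrative fragment, not part of Bio.y):

```
noise_left  :                  {}   -- reduces as it goes: O(1) stack
            | noise_left  N    {}

noise_right :                  {}   -- reduces only at the end: O(n) stack
            | N noise_right    {}
```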
Match : Promoter Translation {} Promoter :: {Int} : Promoter_consensus {1} | Promoter_hcv_large {2} | Promoter_cart {3} | Promoter_hcv_small {4} -------------------- -- HCV SMALL -------------------- -- regions [data from Leung (hvc_small.gr)] Promoter_hcv_small : N V N7_skip K B K N20_skip R N12_skip {} --mod 3 = 0 | K N B N N D N18_skip H N9_skip V N {} --mod 3 = 0 | t N20_skip N6_skip t N4_skip t N6_skip {} --mod 3 = 0 -------------------- -- CONSENSUS -------------------- -- regions [data from Leung (consensus.gr)] Promoter_consensus : Minus_35 N15_skip Minus_10 {} | Minus_35 N15_skip N1_skip Minus_10 N5_skip {} | Minus_35 N15_skip N2_skip Minus_10 N5_skip {} | Minus_35 N15_skip N3_skip Minus_10 N5_skip {} | Minus_35 N15_skip N4_skip Minus_10 N5_skip {} Minus_35 : t t g a c a {} Minus_10 : t a t a a t {} -------------------- -- HVC LARGE -------------------- -- regions [data from Leung (hvc_large.gr)] Promoter_hcv_large : H N11_skip D Y B N3_skip H N12_skip B N5_skip Y N2_skip W N4_skip {} | N D N3_skip V N1_skip B N12_skip H N2_skip B D N2_skip H N2_skip H B N4_skip W N6_skip H H {} | N H N B N D N6_skip H N4_skip K B N6_skip D B N3_skip B N4_skip V N4_skip H N2_skip D N7_skip {} | N N D N12_skip B D N2_skip V N2_skip H D N2_skip D H B N7_skip B D N5_skip H H N6_skip {} | D N D N12_skip B N5_skip H N13_skip B N H H W N6_skip H Y {} | N N D N B N D N H N3_skip D N4_skip V N2_skip H N D H N6_skip H N3_skip D N6_skip H N2_skip B N3_skip {} | D N8_skip H N1_skip H N1_skip D N4_skip H N3_skip V H N11_skip H N2_skip H N5_skip D N1_skip V N1_skip H {} | H N3_skip B N9_skip H N12_skip H D N4_skip W B N2_skip D D H N1_skip D N5_skip D H {} | V N7_skip V N2_skip D N2_skip D N6_skip B H N11_skip D D N1_skip H N1_skip H H N1_skip B N2_skip {} | D N8_skip B D D N2_skip B N6_skip H N4_skip D N5_skip D N1_skip H D N2_skip D N3_skip D D N6_skip {} | B N13_skip H N1_skip D H V N14_skip B N1_skip V N2_skip D N1_skip D V D N1_skip D N3_skip H {} | H V N4_skip B N1_skip D N6_skip 
D N4_skip D N4_skip H H N3_skip B N6_skip B N1_skip D N3_skip D N1_skip D N4_skip {} | W N3_skip V N9_skip D N11_skip B N1_skip D H N5_skip D H N1_skip D N1_skip H D N6_skip {} | K N2_skip D N3_skip H N1_skip H N6_skip H N2_skip B N5_skip D D N7_skip V N2_skip D N1_skip H H N7_skip {} | D N11_skip H D D N2_skip D N6_skip D N3_skip H N6_skip V N1_skip D D N2_skip H B N1_skip B N1_skip {} | H N3_skip B N1_skip H N6_skip V N1_skip B N2_skip V N2_skip D N7_skip B N8_skip H N3_skip H D N1_skip H N1_skip H N1_skip {} | B N4_skip B N12_skip H N4_skip V N2_skip H D N2_skip V H N1_skip H N2_skip H N3_skip B N1_skip K N4_skip {} | W D N7_skip B N1_skip D N2_skip D N2_skip W N1_skip D H N2_skip D N12_skip D N5_skip H {} | a N2_skip t N4_skip g N18_skip {} -------------------- -- CART -------------------- -- regions [data from Leung (cart.gr)] Promoter_cart : N N t a N N N N N N N N N N N {} | N N V a N N N t N N N N N N N {} | t N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N t B N N N t N N N N N N N t N N N N N N N {} -------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------- Translation : Start Mincodon Stop {} | Start Mincodon Codon Stop {} | Start Mincodon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon 
Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Stop {} | Start Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Stop {} | Start Mincodon Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | 
Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Stop {} | Start Mincodon 
Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Stop {} | Start Mincodon Mincodon Mincodon Mincodon Mincodon Mincodon Stop {} --252 bases Mincodon : Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon Codon {} --42 bases N0_skip : {} N1_skip : N {} -- match starts one place on N2_skip : N N {} -- match starts two places on N3_skip : N N N {} -- missing an entire codon N4_skip : N N N N {} -- missing 4 bases N5_skip : N N N N N {} -- missing 5 bases N6_skip : N N N N N N {} -- missing 6 bases N7_skip : N N N N N N N {} -- missing 7 bases N8_skip : N N N N N N N N {} -- missing 8 bases N9_skip : N N N N N N N N N {} -- missing 9 bases N10_skip : N N N N N N N N N N {} -- missing 10 bases N11_skip : N10_skip N1_skip {} -- missing 11 bases N12_skip : N10_skip N2_skip {} -- missing 12 bases N13_skip : N10_skip N3_skip {} -- missing 13 bases N14_skip : N10_skip N4_skip {} -- missing 14 bases N15_skip : N10_skip N5_skip {} -- missing 15 bases N16_skip : N10_skip N6_skip {} -- missing 16 bases N17_skip :
N10_skip N7_skip {} -- missing 17 bases N18_skip : N10_skip N8_skip {} -- missing 18 bases N19_skip : N10_skip N9_skip {} -- missing 19 bases N20_skip : N10_skip N10_skip {} -- missing 20 bases N30_skip : N10_skip N10_skip N10_skip {} -- missing 30 bases N40_skip : N10_skip N10_skip N10_skip N10_skip {} -- missing 40 bases N50_skip : N10_skip N10_skip N10_skip N10_skip N10_skip {} -- missing 50 bases N60_skip : N10_skip N50_skip {} -- missing 60 bases N70_skip : N10_skip N10_skip N50_skip {} -- missing 70 bases N80_skip : N10_skip N10_skip N10_skip N50_skip {} -- missing 80 bases N90_skip : N10_skip N10_skip N10_skip N10_skip N50_skip {} -- missing 90 bases N100_skip : N50_skip N50_skip {} -- missing 100 bases -- Definitions of base categories according to the -- International Union of Biochemistry (IUB) -- Standard Nucleotide Codes. [Leung_data] N -- any base : a {} | c {} | g {} | t {} Y -- pyrimidine : c {} | t {} R -- purine : a {} | g {} S -- strong bonding bases : g {} | c {} W -- weak bonding bases : a {} | t {} K -- keto bases : g {} | t {} AM -- aMino bases : a {} | c {} B -- not base a : g {} | c {} | t {} D -- not base c : a {} | g {} | t {} H -- not base g : a {} | c {} | t {} V -- not base t : a {} | c {} | g {} Base : a {} | c {} | g {} | t {} -------------------- -- codons Start : a t g {} -- start codon Stop -- stop codons : t a a {} | t a g {} | t g a {} Codon -- any other codon : a a a {} | a a c {} | a a g {} | a a t {} | a c a {} | a c c {} | a c g {} | a c t {} | a g a {} | a g c {} | a g g {} | a g t {} | a t a {} | a t c {} | a t g {} | a t t {} | c a a {} | c a c {} | c a g {} | c a t {} | c c a {} | c c c {} | c c g {} | c c t {} | c g a {} | c g c {} | c g g {} | c g t {} | c t a {} | c t c {} | c t g {} | c t t {} | g a a {} | g a c {} | g a g {} | g a t {} | g c a {} | g c c {} | g c g {} | g c t {} | g g a {} | g g c {} | g g g {} | g g t {} | g t a {} | g t c {} | g t g {} | g t t {} | t a c {} | t a t {} | t c a {} | t c c {} | t c g {} | t c t {} | t g c
{} | t g g {} | t g t {} | t t a {} | t t c {} | t t g {} | t t t {} -------------------- --%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% --some aux code { data Token = TokenEOF | Base_A | Base_C | Base_G | Base_T deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer ('\n':cs) = lexer cs lexer ('a':cs) = Base_A : lexer cs lexer ('c':cs) = Base_C : lexer cs lexer ('g':cs) = Base_G : lexer cs lexer ('t':cs) = Base_T : lexer cs } happy-2.1.7/examples/glr/bio-eg/Main.lhs0000644000000000000000000000411107346545000016170 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import Bio > import qualified Data.Map as Map > import Control.Monad.State #include "DV_lhs" > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > let f_ = filter_noise $ Map.toList f > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show f_) > --writeFile "full" (unlines $ map show f) > toDV (trim_graph f_ r) > ParseEOF f -> do > let f_ = filter_noise $ Map.toList f > putStrLn $ "Premature end of input:\n" > ++ unlines (map show f_) > toDV f_ > --writeFile "full" (unlines $ map show f) > ParseError ts f -> do > let f_ = filter_noise $ Map.toList f > putStrLn $ "Error: " ++ show ts > toDV f_ > --writeFile "full" (unlines $ map show f) > forest_lookup f i > = fromJust $ Map.lookup i f --- remove intergenic things, to make graph small enough for drawing -- (prefer to do this with filtering in parser...) 
> filter_noise f > = [ (i, map filter_branch bs) > | (i@(s_i,e_i,l), bs) <- f, not_igs i ] > where > igs = Map.fromList [ (i,False) | i@(_,_,G_Intergenic_noise) <- map fst f ] > not_igs i = Map.findWithDefault True i igs > filter_branch (Branch s ns) = Branch s [ n | n <- ns, not_igs n ] > trim_graph :: NodeMap -> RootNode -> NodeMap > trim_graph f r > = [ (i,n) | (i,n) <- f, Map.findWithDefault False i wanted ] > where > table = Map.fromList f > wanted = snd $ runState (follow r) Map.empty > follow :: ForestId -> State (Map.Map ForestId Bool) () > follow i = do > visited <- get > if Map.findWithDefault False i visited > then return () > else do > case Map.lookup i table of > Nothing > -> error $ "bad node: " ++ show i > Just bs > -> do > modify (\s -> Map.insert i True s) > mapM_ follow $ concatMap b_nodes bs happy-2.1.7/examples/glr/bio-eg/Makefile0000644000000000000000000000160107346545000016235 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs PROG=bio-eg # filtering causes this example to fail... FILTER = --filter FILTER = .hi.o : @ dummy ${PROG} : Bio.o Main.lhs ${GHC} -cpp -fglasgow-exts -o ${PROG} --make Main.lhs BioData.hs Bio.hs : Bio.y ${HAPPY} --info --glr --ghc ${FILTER} $< Bio.o : Bio.hs BioData.hi ${GHC} -cpp -fglasgow-exts -O2 -c Bio.hs DATA_FLAGS = -funfolding-use-threshold0 -fno-strictness BioData.hi BioData.o : BioData.hs @echo "Making BioData.hs WITHOUT optimisation (for speed)" ${GHC} -cpp -fglasgow-exts ${DATA_FLAGS} -c $< run : run12 run6 : ${PROG} ./${PROG} +RTS -s -K5M -RTS `cat 1-600.dna` run12 : ${PROG} rm -f out.1200 ./${PROG} +RTS -s -K15M -RTS `cat 1-1200.dna` > out.1200 2>&1 echo Expect NINE matches, got `grep '^[(,0-9]*G_Match' out.1200 | wc -l` clean : rm -rf ${PROG} Bio.info Bio.hs BioData.hs *.o *.hi out.daVinci \ out.1200 out.600 happy-2.1.7/examples/glr/bio-eg/README0000644000000000000000000000133407346545000015460 0ustar0000000000000000 A more complex example - looking for patterns in DNA sequences. 
This example was derived from undergraduate project work by Julia Fischer at the University of Durham. Some of the grammar is based on the ones developed by Siu-wai Leung, Chris Mellish, and Dave Robertson at the University of Edinburgh. (Contact Paul Callaghan for details, and see the accompanying paper.) Files 1-600.dna and 1-1200.dna contain 600 (1200) bases from the sequence for E. coli. The first sequence parses in a few minutes, the second takes a bit longer. There are open issues about how to skip efficiently over "noise" in the data; addressing these would make the parser faster. Use "make run" to parse the 1-1200.dna sequence - it should take under 15 seconds. happy-2.1.7/examples/glr/common/0000755000000000000000000000000007346545000014725 5ustar0000000000000000happy-2.1.7/examples/glr/common/DV_lhs0000644000000000000000000000351207346545000016030 0ustar0000000000000000 import DaVinciTypes hiding (Edge(..) , Node(..)) import qualified DaVinciTypes (Edge(..) , Node(..)) -- toDV :: NodeMap -> IO () toDV nodes = writeFile "out.daVinci" (show $ map g2n nodes) -- show_gsymbol (HappyTok x) = show x show_gsymbol t = show t g2n (n@(s,e,x), []) = mk_rhombus id (show_gsymbol x ++ show (s,e)) [] where id = show n g2n (n@(s,e,x), [Branch _ bs]) = mk_box id (show_gsymbol x ++ show (s,e)) $ [ DaVinciTypes.R (NodeId $ show j) | j <- bs ] where id = show n g2n (n@(s,e,x), bss) = mk_circle id (show_gsymbol x ++ show (s,e)) $ [ mk_box (id ++ "." ++ show i) (show_gsymbol x ++ show (s,e)) [ DaVinciTypes.R (NodeId $ show j) | j <- js ] | (i,Branch _ js) <- zip [0..] bss ] where id = show n --- mk_box = mk_node box_t mk_circle = mk_node circle_t mk_plain = mk_node text_t mk_rhombus = mk_node rhombus_t mk_node :: Attribute -> String -> String -> [DaVinciTypes.Node] -> DaVinciTypes.Node mk_node a id nm ts = DaVinciTypes.N (NodeId id) (Type "") [a,text nm] $ [ (mk_edge id n) t | (n,t) <- zip [1..]
ts ] mk_edge id child_no t@(DaVinciTypes.R (NodeId id2)) = DaVinciTypes.E (EdgeId eId) (Type "") [] t where eId = concat [id,":",id2,"(",show child_no,")"] mk_edge id child_no t@(DaVinciTypes.N (NodeId id2) _ _ _) = DaVinciTypes.E (EdgeId eId) (Type "") [] t where eId = concat [id,":",id2,"(",show child_no,")"] --- nodeStyle = A "_GO" box_t, circle_t, ellipse_t, rhombus_t, text_t, icon_t :: Attribute box_t = nodeStyle "box" circle_t = nodeStyle "circle" ellipse_t = nodeStyle "ellipse" rhombus_t = nodeStyle "rhombus" text_t = nodeStyle "text" icon_t = nodeStyle "icon" text :: String -> Attribute text = A "OBJECT" happy-2.1.7/examples/glr/common/DaVinciTypes.hs0000644000000000000000000011426607346545000017635 0ustar0000000000000000----------------------------------------------------------------------------------------- -- Haskell binding for daVinci API -- -- Original version: Sven Panne 1997/99 -- Adapted to daVinci 2.1: Tim Geisler May 1998 -- marked all extensions with '(V2.1 API)' ----------------------------------------------------------------------------------------- -- Some changes to names from daVinci API: -- foo_bar => FooBar -- baz => DVBaz in case of name collision -- foo x and foo => Foo (Maybe x) -- -- Note: There are some exceptions to the above rules (but I can't remember... 
;-) module DaVinciTypes( DaVinciCmd(..), GraphCmd(..), MultiCmd(..), MenuCmd(..), FileMenuCmd(..), ViewMenuCmd(..), NavigationMenuCmd(..), AbstractionMenuCmd(..), LayoutMenuCmd(..), AppMenuCmd(..), SetCmd(..), WindowCmd(..), TclCmd(..), SpecialCmd(..), VisualCmd(..), DragAndDropCmd(..), -- (V2.1 API) DaVinciAnswer(..), Node(..), Edge(..), Attribute(..), NodeUpdate(..), EdgeUpdate(..), AttrChange(..), MixedUpdate(..), TypeChange(..), -- (V2.1 API) MenuEntry(..), IconEntry(..), VisualRule(..), -- (V2.1 API) NodeId(..), EdgeId(..), MenuId(..), MenuLabel(..), MenuMne(..), MenuAcc(..), IconId(..), Type(..), Filename(..), ContextId(..), WindowId(..), -- (V2.1 API) Orient(..), Direction(..), Btype(..), MenuMod(..) ) where --- API commands ---------------------------------------------------------- data DaVinciCmd = -- Commands of the API (top-level). Graph GraphCmd -- Graph category | Multi MultiCmd -- Multi category | Menu MenuCmd -- Menu category | AppMenu AppMenuCmd -- AppMenu category | DVSet SetCmd -- Set category | Window WindowCmd -- Window category | Tcl TclCmd -- Tcl category | Special SpecialCmd -- Special category | DVNothing -- No operation, for synchronization.
| Visual VisualCmd -- Visual category (V2.1 API) | DragAndDrop DragAndDropCmd -- Drag and Drop category (V2.1 API) deriving Eq data GraphCmd = -- Send and update graphs New [Node] -- Send new graph | NewPlaced [Node] -- Dito, better layout | Update [NodeUpdate] [EdgeUpdate] -- Send graph updates | ChangeAttr [AttrChange] -- Change attributes | UpdateAndChangeAttr [NodeUpdate] [EdgeUpdate] [AttrChange] -- Combination of both | UpdateMixed [MixedUpdate] -- Send mixed graph updates (V2.1 API) | UpdateAndChangeAttrMixed [MixedUpdate] [AttrChange] -- Combination of both (V2.1 API) | ChangeType [TypeChange] -- Change types (V2.1 API) deriving Eq data MultiCmd = -- For multi-graph mode NewContext -- Open graph context | OpenContext ContextId -- Dito, but ID is given | SetContext ContextId -- Switch to context | SetContextWindow ContextId WindowId -- switch to context and window (V2.1 API) deriving Eq data MenuCmd = -- Call functions of menu File FileMenuCmd -- File menu category | View ViewMenuCmd -- View menu category | Navigation NavigationMenuCmd -- Navigation menu category | Abstraction AbstractionMenuCmd -- Abstraction menu category | Layout LayoutMenuCmd -- Layout menu category deriving Eq data FileMenuCmd = -- File menu functions ClearGraph -- Clear graph. 
| OpenGraph Filename -- Load graph from file | OpenGraphPlaced Filename -- Dito, better layout | OpenStatus Filename -- Load status from file | SaveGraph Filename -- Save graph as term | SaveStatus Filename -- Save graph as status | Print (Maybe Filename) -- Save as PostScript | Close -- Close graph window | Exit -- Exit daVinci deriving Eq data ViewMenuCmd = -- View menu functions OpenNewView -- Open additional view | OpenSurveyView -- Open survey view | FullScale -- Set scale to 100% | FitScaleToWindow -- Set scale to fit | Scale (Maybe Int) -- Set scale to Int | GraphInfo -- Open Graph Info dialog | DaVinciInfo -- Open daVinci Info dialog deriving Eq data NavigationMenuCmd = -- Navigation menu functions SelectParents [NodeId] -- Select parents of nodes | SelectSiblings [NodeId] -- Select siblings of nodes | SelectChilds [NodeId] -- Select childs of nodes | SelectChildren [NodeId] -- Select childs of nodes (V2.1 API) | Navigator (Maybe (NodeId,Direction,Bool)) -- Navigate in graph | Find (Maybe (String,Bool,Bool)) -- Find a node deriving Eq data AbstractionMenuCmd = -- Abstraction menu functions HideSubgraph [NodeId] -- Hide subgraphs of nodes | ShowSubgraph [NodeId] -- Show subgraphs of nodes | RestoreAllSubgraphs -- Show all hidden subgr | HideEdges [NodeId] -- Hide edges of nodes | ShowEdges [NodeId] -- Show edges of nodes | RestoreAllEdges -- Show all hidden edges deriving Eq data LayoutMenuCmd = -- Layout menu functions ImproveAll -- Start layout algorithm | ImproveVisible -- Dito, only visible nodes | CompactAll -- Compact graph layout | Orientation Orient -- Switch orientation deriving Eq data AppMenuCmd = -- Create menus/icons CreateMenus [MenuEntry] -- Add menus in Edit | CreateIcons [IconEntry] -- Add icons in icon-bar | ActivateMenus [MenuId] -- Enable menus | ActivateIcons [IconId] -- Enable icons | ControlFileEvents -- Get events of File menu deriving Eq data SetCmd = -- Set options LayoutAccuracy Int -- Layout algorithm params | KeepNodesAtLevels 
Bool -- Keep nodes at levels | FontSize Int -- Node font size | GapWidth Int -- Min. node distance | GapHeight Int -- Min. level distance | MultiEdgeGap Int -- Distance for multi-edges | SelfEdgeRadius Int -- Distance for self-edges | ScrollingOnSelection Bool -- Auto focusing node | AnimationSpeed Int -- Speed of animation | NoCache Bool -- Control pixmap caching. Details | RulesFirst Bool -- Should rules overlap attributes? (V2.1 API) deriving Eq data WindowCmd = -- Control windows Title String -- Set window title | ShowMessage String -- Left footer message | ShowStatus String -- Right footer message | Position Int Int -- Window origin x/y | Size Int Int -- Window width/height | Raise -- Raise window | Iconify -- Iconify window | Deiconify -- Deiconify window | Activate -- Enable interaction | Deactivate -- Disable interaction | FileBrowser Bool String String String String [Btype] Bool -- Show file browser deriving Eq data TclCmd = -- Tcl/Tk interface DVEval String -- Eval Tcl/Tk script | EvalFile Filename -- Dito, from file deriving Eq data SpecialCmd = -- Special commands SelectNodes [NodeId] -- Select specified nodes | SelectEdge EdgeId -- Select specified edge | FocusNode NodeId -- Scroll to specified node | FocusNodeAnimated NodeId -- Dito, with animation | ShowUrl String -- Display HTML-page deriving Eq data VisualCmd = -- Visual commands (V2.1 API) NewRules [VisualRule] -- Specify new rules | AddRules [VisualRule] -- Add rules or exchange existing ones deriving Eq data DragAndDropCmd = -- Drag and Drop commands (V2.1 API) DraggingOn -- Switch dragging on | DragAndDropOn -- Switch drag&drop on | DraggingOff -- Switch drag* off | NewNodeAtCoord NodeUpdate -- Insert at coordinate | NewEdgeAndNodeAtCoord NodeUpdate EdgeUpdate -- Dito, plus edge where node is the child deriving Eq --- API Answers ----------------------------------------------------------- data DaVinciAnswer = -- Answers from the API Ok -- Positive confirmer | CommunicationError String -- 
Negative confirmer | NodeSelectionsLabels [NodeId] -- Labels of sel. nodes | NodeDoubleClick -- Sel. node double-clicked | EdgeSelectionLabel EdgeId -- Label of sel. edge | EdgeSelectionLabels NodeId NodeId -- Dito, parent/child | EdgeDoubleClick -- Sel. edge double-clicked | MenuSelection MenuId -- ID of selected menu | IconSelection IconId -- ID of selected icon | Context ContextId -- Other context (graph) | TclAnswer String -- Answer from Tcl script | BrowserAnswer String String -- File browser result | Disconnect -- Termination request | Closed -- Context (graph) closed | Quit -- daVinci terminated | PopupSelectionNode NodeId MenuId -- Pop-up menu selected. (V2.1 API) | PopupSelectionEdge EdgeId MenuId -- Pop-up menu selected (V2.1 API) | CreateNode -- Dragging answer (V2.1 API) | CreateNodeAndEdge NodeId -- Parent ID of new edge (V2.1 API) | CreateEdge NodeId NodeId -- Node IDs of new edge (V2.1 API) | DropNode ContextId WindowId NodeId ContextId WindowId NodeId -- Node A dropped on B (V2.1 API) | ContextWindow ContextId WindowId -- Context ID + window ID (V2.1 API) | OpenWindow -- New window opened (V2.1 API) | CloseWindow WindowId -- Window closed (V2.1 API) deriving Eq --- Term Representation for Graphs ---------------------------------------- data Node = N NodeId Type [Attribute] [Edge] -- Node with ID/type/attr/childs | R NodeId -- Reference to a node deriving Eq data Edge = E EdgeId Type [Attribute] Node -- Edges with ID/type/attr/child deriving Eq data Attribute = A String String -- regular node/edge attributes (key/val) | M [MenuEntry] -- pop-up menu for node/edge (V2.1 API) deriving Eq --- Graph Updates --------------------------------------------------------- data NodeUpdate = -- Delete or remove nodes DeleteNode NodeId | NewNode NodeId Type [Attribute] deriving Eq data EdgeUpdate = -- Delete or remove edges DeleteEdge EdgeId | NewEdge EdgeId Type [Attribute] NodeId NodeId | NewEdgeBehind EdgeId EdgeId Type [Attribute] NodeId NodeId deriving Eq data 
MixedUpdate = -- Node or Edge update (V2.1) NU NodeUpdate -- wrapper needed in Haskell | EU EdgeUpdate -- wrapper needed in Haskell deriving Eq data AttrChange = -- Change attributes Node NodeId [Attribute] | Edge EdgeId [Attribute] deriving Eq data TypeChange = -- Change types (V2.1 API) NodeType NodeId Type -- Label, type | EdgeType EdgeId Type -- Label, type deriving Eq --- Application Menus and Icons ------------------------------------------- data MenuEntry = -- Create Menus MenuEntry MenuId MenuLabel | MenuEntryMne MenuId MenuLabel MenuMne MenuMod MenuAcc | SubmenuEntry MenuId MenuLabel [MenuEntry] | SubmenuEntryMne MenuId MenuLabel [MenuEntry] MenuMne | BlankMenuEntry | MenuEntryDisabled MenuId MenuLabel -- (V2.1 API) | SubmenuEntryDisabled MenuId MenuLabel [MenuEntry] -- (V2.1 API) deriving Eq data IconEntry = -- Create Icons IconEntry IconId Filename String | BlankIconEntry deriving Eq --- Visualization Rules (V2.1 API) --------------------------------------- data VisualRule = -- (V2.1 API) NR Type [Attribute] -- Rules for all nodes of given type | ER Type [Attribute] -- Rules for all edges of given type deriving Eq --- String Sorts ---------------------------------------------------------- newtype NodeId = NodeId String deriving Eq -- Unique node ID newtype EdgeId = EdgeId String deriving Eq -- Unique edge ID newtype MenuId = MenuId String deriving Eq -- Unique menu ID newtype MenuLabel = MenuLabel String deriving Eq -- Text of menu entry newtype MenuMne = MenuMne String deriving Eq -- Motif mnemonic char newtype MenuAcc = MenuAcc String deriving Eq -- Motif accelerator key newtype IconId = IconId String deriving Eq -- Unique icon ID newtype Type = Type String deriving Eq -- Arbitrary type newtype Filename = Filename String deriving Eq -- Valid Filename newtype ContextId = ContextId String deriving Eq -- Context ID newtype WindowId = WindowId String deriving Eq -- Window ID (V2.1 API) --- Basic Sorts 
----------------------------------------------------------- data Orient = TopDown | BottomUp | LeftRight | RightLeft deriving Eq data Direction = Up | Down | DVLeft | DVRight deriving Eq data Btype = Bt String String String deriving Eq -- Text, pattern and title postfix data MenuMod = Alternate | Shift | Control | Meta | None deriving Eq -- Motif modifier key --------------------------------------------------------------------------- -- Show instances for daVinci API commands -- -- Everything would be *much* easier if daVinci allowed spaces in commands... instance Show DaVinciCmd where showsPrec _ (Graph graphCmd) = showFunc1 "graph" graphCmd showsPrec _ (Multi multiCmd) = showFunc1 "multi" multiCmd showsPrec _ (Menu menuCmd) = showFunc1 "menu" menuCmd showsPrec _ (AppMenu appMenuCmd) = showFunc1 "app_menu" appMenuCmd showsPrec _ (DVSet setCmd) = showFunc1 "set" setCmd showsPrec _ (Window windowCmd) = showFunc1 "window" windowCmd showsPrec _ (Tcl tclCmd) = showFunc1 "tcl" tclCmd showsPrec _ (Special specialCmd) = showFunc1 "special" specialCmd showsPrec _ DVNothing = showString "nothing" showsPrec _ (Visual visualCmd) = showFunc1 "visual" visualCmd showsPrec _ (DragAndDrop dragAndDropCmd) = showFunc1 "drag_and_drop" dragAndDropCmd instance Show GraphCmd where showsPrec _ (New nodes) = showFunc1 "new" nodes showsPrec _ (NewPlaced nodes) = showFunc1 "new_placed" nodes showsPrec _ (Update nUpds eUpds) = showFunc2 "update" nUpds eUpds showsPrec _ (ChangeAttr aChs) = showFunc1 "change_attr" aChs showsPrec _ (UpdateAndChangeAttr nUpds eUpds aChs) = showFunc3 "update_and_change_attr" nUpds eUpds aChs showsPrec _ (UpdateMixed mUpds) = showFunc1 "update" mUpds showsPrec _ (UpdateAndChangeAttrMixed mUpds aChs)= showFunc2 "update_and_change_attr" mUpds aChs showsPrec _ (ChangeType tChs) = showFunc1 "change_type" tChs instance Show MultiCmd where showsPrec _ NewContext = showString "new_context" showsPrec _ (OpenContext contextId) = showFunc1 "open_context" contextId showsPrec _
(SetContext contextId) = showFunc1 "set_context" contextId showsPrec _ (SetContextWindow contextId windowId)= showFunc2 "set_context" contextId windowId instance Show MenuCmd where showsPrec _ (File fCmd) = showFunc1 "file" fCmd showsPrec _ (View vCmd) = showFunc1 "view" vCmd showsPrec _ (Navigation nCmd) = showFunc1 "navigation" nCmd showsPrec _ (Abstraction aCmd) = showFunc1 "abstraction" aCmd showsPrec _ (Layout lCmd) = showFunc1 "layout" lCmd instance Show FileMenuCmd where showsPrec _ ClearGraph = showString "new" showsPrec _ (OpenGraph fname) = showFunc1 "open_graph" fname showsPrec _ (OpenGraphPlaced fname) = showFunc1 "open_graph_placed" fname showsPrec _ (OpenStatus fname) = showFunc1 "open_status" fname showsPrec _ (SaveGraph fname) = showFunc1 "save_graph" fname showsPrec _ (SaveStatus fname) = showFunc1 "save_status" fname showsPrec _ (Print Nothing) = showString "print" showsPrec _ (Print (Just fname)) = showFunc1 "print" fname showsPrec _ Close = showString "close" showsPrec _ Exit = showString "exit" instance Show ViewMenuCmd where showsPrec _ OpenNewView = showString "open_new_view" showsPrec _ OpenSurveyView = showString "open_survey_view" showsPrec _ FullScale = showString "full_scale" showsPrec _ FitScaleToWindow = showString "fit_scale_to_window" showsPrec _ (Scale Nothing) = showString "scale" showsPrec _ (Scale (Just scale)) = showFunc1 "scale" scale showsPrec _ GraphInfo = showString "graph_info" showsPrec _ DaVinciInfo = showString "daVinci_info" instance Show NavigationMenuCmd where showsPrec _ (SelectParents nodeIds) = showFunc1 "select_parents" nodeIds showsPrec _ (SelectSiblings nodeIds) = showFunc1 "select_siblings" nodeIds -- TODO: change 'childs' to 'children'. But then it's no longer V2.0.x compatible ... 
showsPrec _ (SelectChilds nodeIds) = showFunc1 "select_childs" nodeIds showsPrec _ (SelectChildren nodeIds) = showFunc1 "select_childs" nodeIds showsPrec _ (Navigator Nothing) = showString "navigator" showsPrec _ (Navigator (Just (nodeId,dir,flag))) = showFunc3 "navigator" nodeId dir flag showsPrec _ (Find Nothing) = showString "find" showsPrec _ (Find (Just (txt,cas,exact))) = showFunc3 "find" txt cas exact instance Show AbstractionMenuCmd where showsPrec _ (HideSubgraph nodeIds) = showFunc1 "hide_subgraph" nodeIds showsPrec _ (ShowSubgraph nodeIds) = showFunc1 "show_subgraph" nodeIds showsPrec _ RestoreAllSubgraphs = showString "restore_all_subgraphs" showsPrec _ (HideEdges nodeIds) = showFunc1 "hide_edges" nodeIds showsPrec _ (ShowEdges nodeIds) = showFunc1 "show_edges" nodeIds showsPrec _ RestoreAllEdges = showString "restore_all_edges" instance Show LayoutMenuCmd where showsPrec _ ImproveAll = showString "improve_all" showsPrec _ ImproveVisible = showString "improve_visible" showsPrec _ CompactAll = showString "compact_all" showsPrec _ (Orientation orient) = showFunc1 "orientation" orient instance Show AppMenuCmd where showsPrec _ (CreateMenus menuEntries) = showFunc1 "create_menus" menuEntries showsPrec _ (CreateIcons iconEntries) = showFunc1 "create_icons" iconEntries showsPrec _ (ActivateMenus menuIds) = showFunc1 "activate_menus" menuIds showsPrec _ (ActivateIcons iconIds) = showFunc1 "activate_icons" iconIds showsPrec _ ControlFileEvents = showString "control_file_events" instance Show SetCmd where showsPrec _ (LayoutAccuracy x) = showFunc1 "layout_accuracy" x showsPrec _ (KeepNodesAtLevels x) = showBoolFunc "keep_nodes_at_levels" x showsPrec _ (FontSize x) = showFunc1 "font_size" x showsPrec _ (GapWidth x) = showFunc1 "gap_width" x showsPrec _ (GapHeight x) = showFunc1 "gap_height" x showsPrec _ (MultiEdgeGap x) = showFunc1 "multi_edge_gap" x showsPrec _ (SelfEdgeRadius x) = showFunc1 "self_edge_radius" x showsPrec _ (ScrollingOnSelection x) = 
showBoolFunc "scrolling_on_selection" x showsPrec _ (AnimationSpeed x) = showFunc1 "animation_speed" x showsPrec _ (NoCache x) = showBoolFunc "no_cache" x showsPrec _ (RulesFirst x) = showBoolFunc "rules_first" x instance Show WindowCmd where showsPrec _ (Title str) = showFunc1 "title" str showsPrec _ (ShowMessage str) = showFunc1 "show_message" str showsPrec _ (ShowStatus str) = showFunc1 "show_status" str showsPrec _ (Position x y) = showFunc2 "position" x y showsPrec _ (Size w h) = showFunc2 "size" w h showsPrec _ Raise = showString "raise" showsPrec _ Iconify = showString "iconify" showsPrec _ Deiconify = showString "deiconify" showsPrec _ Activate = showString "activate" showsPrec _ Deactivate = showString "deactivate" showsPrec _ (FileBrowser open title btn dir file tps hid) = showFunc7 "file_browser" open title btn dir file tps hid instance Show TclCmd where showsPrec _ (DVEval str) = showFunc1 "eval" str showsPrec _ (EvalFile fname) = showFunc1 "eval_file" fname instance Show SpecialCmd where showsPrec _ (SelectNodes nodes) = showFunc1 "select_nodes" nodes showsPrec _ (SelectEdge edge) = showFunc1 "select_edge" edge showsPrec _ (FocusNode nodeIds) = showFunc1 "focus_node" nodeIds showsPrec _ (FocusNodeAnimated nodeIds) = showFunc1 "focus_node_animated" nodeIds showsPrec _ (ShowUrl url) = showFunc1 "show_url" url instance Show VisualCmd where showsPrec _ (NewRules visualRules) = showFunc1 "new_rules" visualRules showsPrec _ (AddRules visualRules) = showFunc1 "add_rules" visualRules instance Show DragAndDropCmd where showsPrec _ DraggingOn = showString "dragging_on" showsPrec _ DragAndDropOn = showString "drag_and_drop_on" showsPrec _ DraggingOff = showString "dragging_off" showsPrec _ (NewNodeAtCoord nUpd) = showFunc1 "new_node_at_coord" nUpd showsPrec _ (NewEdgeAndNodeAtCoord nUpd eUpd) = showFunc2 "new_edge_and_node_at_coord" nUpd eUpd --------------------------------------------------------------------------- instance Show DaVinciAnswer where showsPrec
_ Ok = showString "ok" showsPrec _ (CommunicationError msg) = showFunc1 "communication_error" msg showsPrec _ (NodeSelectionsLabels nodeIds) = showFunc1 "node_selections_labels" nodeIds showsPrec _ NodeDoubleClick = showString "node_double_click" showsPrec _ (EdgeSelectionLabel edgeId) = showFunc1 "edge_selection_label" edgeId showsPrec _ (EdgeSelectionLabels parent child) = showFunc2 "edge_selection_labels" parent child showsPrec _ EdgeDoubleClick = showString "edge_double_click" showsPrec _ (MenuSelection menuId) = showFunc1 "menu_selection" menuId showsPrec _ (IconSelection iconId) = showFunc1 "icon_selection" iconId showsPrec _ (Context contextId) = showFunc1 "context" contextId showsPrec _ (TclAnswer retVal) = showFunc1 "tcl_answer" retVal showsPrec _ (BrowserAnswer file typ) = showFunc2 "browser_answer" file typ showsPrec _ Disconnect = showString "disconnect" showsPrec _ Closed = showString "closed" showsPrec _ Quit = showString "quit" showsPrec _ (PopupSelectionNode nId mId) = showFunc2 "popup_selection_node" nId mId showsPrec _ (PopupSelectionEdge eId mId) = showFunc2 "popup_selection_edge" eId mId showsPrec _ CreateNode = showString "create_node" showsPrec _ (CreateNodeAndEdge nId) = showFunc1 "create_node_and_edge" nId showsPrec _ (CreateEdge nId1 nId2) = showFunc2 "create_edge" nId1 nId2 showsPrec _ (DropNode cId1 wId1 nId1 cId2 wId2 nId2) = showFunc6 "drop_node" cId1 wId1 nId1 cId2 wId2 nId2 showsPrec _ (ContextWindow cId wId) = showFunc2 "context_window" cId wId showsPrec _ OpenWindow = showString "open_window" showsPrec _ (CloseWindow wId) = showFunc1 "close_window" wId instance Read DaVinciAnswer where readsPrec _ r = [ (Ok, s) | ("ok", s) <- lexR ] ++ [ (CommunicationError m, t) | ("communication_error", s) <- lexR , ([m], t) <- readArgs s ] ++ [ (NodeSelectionsLabels (map NodeId n), t) | ("node_selections_labels", s) <- lexR , (n, t) <- readStrs s ] ++ [ (NodeDoubleClick, s) | ("node_double_click", s) <- lexR ] ++ [ (EdgeSelectionLabel (EdgeId e), 
t) | ("edge_selection_label", s) <- lexR , ([e], t) <- readArgs s ] ++ [ (EdgeSelectionLabels (NodeId p) (NodeId c), t) | ("edge_selection_labels", s) <- lexR , ([p,c], t) <- readArgs s ] ++ [ (EdgeDoubleClick, s) | ("edge_double_click", s) <- lexR ] ++ [ (MenuSelection (MenuId m), t) | ("menu_selection", s) <- lexR , ([m], t) <- readArgs s ] ++ [ (IconSelection (IconId i), t) | ("icon_selection", s) <- lexR , ([i], t) <- readArgs s ] ++ [ (Context (ContextId c), t) | ("context", s) <- lexR , ([c], t) <- readArgs s ] ++ [ (TclAnswer a, t) | ("tcl_answer", s) <- lexR , ([a], t) <- readArgs s ] ++ [ (BrowserAnswer f y, t) | ("browser_answer", s) <- lexR , ([f,y], t) <- readArgs s ] ++ [ (Disconnect, s) | ("disconnect", s) <- lexR ] ++ [ (Closed, s) | ("closed", s) <- lexR ] ++ [ (Quit, s) | ("quit", s) <- lexR ] ++ [ (PopupSelectionNode (NodeId n) (MenuId m), t) | ("popup_selection_node", s) <- lexR , ([n,m], t) <- readArgs s ] ++ [ (PopupSelectionEdge (EdgeId e) (MenuId m), t) | ("popup_selection_edge", s) <- lexR , ([e,m], t) <- readArgs s ] ++ [ (CreateNode, s) | ("create_node", s) <- lexR ] ++ [ (CreateNodeAndEdge (NodeId n), t) | ("create_node_and_edge", s) <- lexR , ([n], t) <- readArgs s ] ++ [ (CreateEdge (NodeId n1) (NodeId n2), t) | ("create_edge", s) <- lexR , ([n1, n2], t) <- readArgs s ] ++ [ (DropNode (ContextId c1) (WindowId w1) (NodeId n1) (ContextId c2) (WindowId w2) (NodeId n2), t) | ("drop_node", s) <- lexR , ([c1,w1,n1,c2,w2,n2], t) <- readArgs s ] ++ [ (ContextWindow (ContextId c) (WindowId w), t)| ("context_window", s) <- lexR , ([c,w], t) <- readArgs s ] ++ [ (OpenWindow, s) | ("open_window", s) <- lexR ] ++ [ (CloseWindow (WindowId w), t) | ("close_window", s) <- lexR , ([w], t) <- readArgs s ] where lexR = lex r readArgs :: ReadS [String] readArgs s = [ (x:xs, v) | ("(", t) <- lex s, (x, u) <- reads t, (xs, v) <- readArgs2 u ] readArgs2 :: ReadS [String] readArgs2 s = [ ([], t) | (")",t) <- lex s ] ++ [ (x:xs, v) | (",",t) <- lex s, (x, u) <- 
reads t, (xs, v) <- readArgs2 u ] readStrs :: ReadS [String] readStrs = reads --------------------------------------------------------------------------- instance Show Node where showsPrec _ (N nodeId typ attrs edges) = showLabeled nodeId (showFunc3 "n" typ attrs edges) showsPrec _ (R nodeId) = showFunc1 "r" nodeId showList = showLst instance Show Edge where showsPrec _ (E edgeId typ attrs node) = showLabeled edgeId (showFunc3 "e" typ attrs node) showList = showLst instance Show Attribute where showsPrec _ (A key value) = showFunc2 "a" key value showsPrec _ (M menuEntries) = showFunc1 "m" menuEntries showList = showLst instance Show NodeUpdate where showsPrec _ (DeleteNode nodeId) = showFunc1 "delete_node" nodeId showsPrec _ (NewNode nodeId typ attrs) = showFunc3 "new_node" nodeId typ attrs showList = showLst instance Show EdgeUpdate where showsPrec _ (DeleteEdge edgeId) = showFunc1 "delete_edge" edgeId showsPrec _ (NewEdge edgeId typ attrs nodeId1 nodeId2) = showFunc5 "new_edge" edgeId typ attrs nodeId1 nodeId2 showsPrec _ (NewEdgeBehind edgeId1 edgeId2 typ attrs nodeId1 nodeId2) = showFunc6 "new_edge_behind" edgeId1 edgeId2 typ attrs nodeId1 nodeId2 showList = showLst instance Show MixedUpdate where showsPrec _ (NU nUpd) = shows nUpd showsPrec _ (EU eUpd) = shows eUpd showList = showLst instance Show AttrChange where showsPrec _ (Node nodeId attrs) = showFunc2 "node" nodeId attrs showsPrec _ (Edge edgeId attrs) = showFunc2 "edge" edgeId attrs showList = showLst instance Show TypeChange where showsPrec _ (NodeType nodeId typ) = showFunc2 "node" nodeId typ showsPrec _ (EdgeType edgeId typ) = showFunc2 "edge" edgeId typ showList = showLst --------------------------------------------------------------------------- instance Show MenuEntry where showsPrec _ (MenuEntry menuId menuLabel) = showFunc2 "menu_entry" menuId menuLabel showsPrec _ (MenuEntryMne menuId menuLabel menuMne menuMod menuAcc) = showFunc5 "menu_entry_mne" menuId menuLabel menuMne menuMod menuAcc 
showsPrec _ (SubmenuEntry menuId menuLabel menuEntries) = showFunc3 "submenu_entry" menuId menuLabel menuEntries showsPrec _ (SubmenuEntryMne menuId menuLabel menuEntries menuMne) = showFunc4 "submenu_entry_mne" menuId menuLabel menuEntries menuMne showsPrec _ BlankMenuEntry = showString "blank" showsPrec _ (MenuEntryDisabled menuId menuLabel) = showFunc2 "menu_entry_disabled" menuId menuLabel showsPrec _ (SubmenuEntryDisabled menuId menuLabel menuEntries) = showFunc3 "submenu_entry_disabled" menuId menuLabel menuEntries instance Show IconEntry where showsPrec _ (IconEntry iconId filename descr) = showFunc3 "icon_entry" iconId filename descr showsPrec _ BlankIconEntry = showString "blank" --------------------------------------------------------------------------- instance Show VisualRule where showsPrec _ (NR typ attrs) = showFunc2 "nr" typ attrs showsPrec _ (ER typ attrs) = showFunc2 "er" typ attrs showList = showLst --------------------------------------------------------------------------- instance Show NodeId where showsPrec _ (NodeId s) = shows s showList = showLst instance Show EdgeId where showsPrec _ (EdgeId s) = shows s showList = showLst instance Show MenuId where showsPrec _ (MenuId s) = shows s showList = showLst instance Show MenuLabel where showsPrec _ (MenuLabel s) = shows s showList = showLst instance Show MenuMne where showsPrec _ (MenuMne s) = shows s showList = showLst instance Show MenuAcc where showsPrec _ (MenuAcc s) = shows s showList = showLst instance Show IconId where showsPrec _ (IconId s) = shows s showList = showLst instance Show Type where showsPrec _ (Type s) = shows s showList = showLst instance Show Filename where showsPrec _ (Filename s) = shows s showList = showLst instance Show ContextId where showsPrec _ (ContextId s) = shows s showList = showLst instance Show WindowId where showsPrec _ (WindowId s) = shows s showList = showLst --------------------------------------------------------------------------- instance Show Orient where 
showsPrec _ TopDown = showString "top_down" showsPrec _ BottomUp = showString "bottom_up" showsPrec _ LeftRight = showString "left_right" showsPrec _ RightLeft = showString "right_left" instance Show Direction where showsPrec _ Up = showString "up" showsPrec _ Down = showString "down" showsPrec _ DVLeft = showString "left" showsPrec _ DVRight = showString "right" instance Show Btype where showsPrec _ (Bt txt pat post) = showFunc3 "bt" txt pat post instance Show MenuMod where showsPrec _ Alternate = showString "alt" showsPrec _ Shift = showString "shift" showsPrec _ Control = showString "control" showsPrec _ Meta = showString "meta" showsPrec _ None = showString "none" --------------------------------------------------------------------------- showFunc1 :: Show a => String -> a -> ShowS showFunc1 funcName arg1 = showString funcName . showParen True (shows arg1) showFunc2 :: (Show a,Show b) => String -> a -> b -> ShowS showFunc2 funcName arg1 arg2 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2) showFunc3 :: (Show a,Show b,Show c) => String -> a -> b -> c -> ShowS showFunc3 funcName arg1 arg2 arg3 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3) showFunc4 :: (Show a,Show b,Show c,Show d) => String -> a -> b -> c -> d -> ShowS showFunc4 funcName arg1 arg2 arg3 arg4 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4) showFunc5 :: (Show a,Show b,Show c,Show d,Show e) => String -> a -> b -> c -> d -> e -> ShowS showFunc5 funcName arg1 arg2 arg3 arg4 arg5 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4 . showChar ',' . shows arg5) showFunc6 :: (Show a,Show b,Show c,Show d,Show e,Show f) => String -> a -> b -> c -> d -> e -> f -> ShowS showFunc6 funcName arg1 arg2 arg3 arg4 arg5 arg6 = showString funcName . 
showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4 . showChar ',' . shows arg5 . showChar ',' . shows arg6) showFunc7 :: (Show a,Show b,Show c,Show d,Show e,Show f,Show g) => String -> a -> b -> c -> d -> e -> f -> g -> ShowS showFunc7 funcName arg1 arg2 arg3 arg4 arg5 arg6 arg7 = showString funcName . showParen True (shows arg1 . showChar ',' . shows arg2 . showChar ',' . shows arg3 . showChar ',' . shows arg4 . showChar ',' . shows arg5 . showChar ',' . shows arg6 . showChar ',' . shows arg7) showLabeled :: Show a => a -> ShowS -> ShowS showLabeled iD arg = showChar 'l' . showParen True (shows iD . showChar ',' . arg) showLst :: Show a => [a] -> ShowS showLst [] = showString "[]" showLst (x:xs) = showChar '[' . shows x . showl xs where showl [] = showChar ']' showl (y:ys) = showChar ',' . shows y . showl ys showBoolFunc :: String -> Bool -> ShowS showBoolFunc funcName flag = showString funcName . showParen True (showString (if flag then "true" else "false")) happy-2.1.7/examples/glr/expr-eval/0000755000000000000000000000000007346545000015340 5ustar0000000000000000happy-2.1.7/examples/glr/expr-eval/Expr.y0000644000000000000000000000121707346545000016451 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token '*' { Sym '*' } '+' { Sym '+' } '-' { Sym '-' } '(' { Sym '(' } ')' { Sym ')' } i { AnInt $$ } %% E :: {Int} : E '+' E { $1 + $3 } | E '*' E { $1 * $3 } | E '-' E { $1 - $3 } | '(' E ')' { $2 } | i { $1 } { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-2.1.7/examples/glr/expr-eval/Hugs.lhs0000644000000000000000000000116707346545000016763 0ustar0000000000000000> module Main where > import 
System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > putStrLn $ show (decode (forest_lookup f) r ::[Int]) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-2.1.7/examples/glr/expr-eval/Main.lhs0000644000000000000000000000170607346545000016740 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > (s:o) <- getArgs > let x = concat o > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ (if 'f' `elem` x then unlines (map show $ Map.toList f) else "") > ++ (if 'r' `elem` x then unlines (map show (decode (forest_lookup f) r ::[Int])) else "") > if 'g' `elem` x then toDV (Map.toList f) else return () > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-2.1.7/examples/glr/expr-eval/Makefile0000644000000000000000000000103507346545000016777 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs OPT = -O DECODE = --decode expr : Expr.hs Main.lhs # might want to run happy with --ghc ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "1+2*4-3" runn : expr ./expr +RTS -s -RTS `perl -e 'print join ("+", (1 .. 
${NUM}));'` | tee out-${NUM} cat expr.stat >> out-${NUM} eof : expr echo testing premature eof ./expr "1+2*" err : expr echo testing syntax error ./expr "1+2*2++3" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-2.1.7/examples/glr/expr-eval/README0000644000000000000000000000042307346545000016217 0ustar0000000000000000 Example of arithmetic expression parsing, with decoding of semantic values (ie it gives a list of possible results of computation). "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO () happy-2.1.7/examples/glr/expr-monad/0000755000000000000000000000000007346545000015507 5ustar0000000000000000happy-2.1.7/examples/glr/expr-monad/Expr.y0000644000000000000000000000136407346545000016623 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %monad { IO } { (>>=) } { return } %lexer { lexer } { TokenEOF } %token '*' { Sym '*' } '+' { Sym '+' } '-' { Sym '-' } '(' { Sym '(' } ')' { Sym ')' } i { AnInt $$ } %% E :: {Int} : E '+' E {% {-print ($1,$3) >>-} if odd $3 then fail "odd num" else return ($1 + $3) } | E '*' E { $1 * $3 } | E '-' E { $1 - $3 } | '(' E ')' { $2 } | i { $1 } { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-2.1.7/examples/glr/expr-monad/Hugs.lhs0000644000000000000000000000125707346545000017132 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > let ms = 
decode (forest_lookup f) r ::[IO Int] > mapM_ (\ma -> catch ma (\_ -> return 0) >>= print) ms > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-2.1.7/examples/glr/expr-monad/Main.lhs0000644000000000000000000000157607346545000017114 0ustar0000000000000000> module Main where > import System.IO.Error(catchIOError) > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > let ms = decode (forest_lookup f) r ::[IO Int] > mapM_ (\ma -> catchIOError ma (\_ -> return 0) >>= print) ms > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-2.1.7/examples/glr/expr-monad/Makefile0000644000000000000000000000056207346545000017152 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs DECODE = --decode expr : Expr.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "1+2*4-3" eof : expr echo testing premature eof ./expr "1+2*" err : expr echo testing syntax error ./expr "1+2*2++3" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-2.1.7/examples/glr/expr-monad/README0000644000000000000000000000063007346545000016366 0ustar0000000000000000 Example of arithmetic expression parsing, with decoding of semantic values (ie it gives a list of possible results of computation). BUT: it runs the computations under a monad. 
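The behaviour of the monadic actions can be sketched in isolation. The following is a hypothetical stand-alone sketch, not part of this distribution: addOrFail mirrors the %monad { IO } action in Expr.y, which fails when the right operand of '+' is odd, and the driver catches each failure and substitutes zero, as Main.lhs does with catchIOError.

```haskell
import System.IO.Error (catchIOError)

-- Mirrors the E -> E '+' E action in Expr.y: fail on an odd right operand.
addOrFail :: Int -> Int -> IO Int
addOrFail x y
  | odd y     = fail "odd num"      -- throws a userError in IO
  | otherwise = return (x + y)

main :: IO ()
main = do
  -- Run each decoded computation, showing failures as zeros.
  rs <- mapM (\ma -> ma `catchIOError` \_ -> return 0)
             [addOrFail 1 2, addOrFail 1 3]
  print rs   -- [3,0]
```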
In this example, certain cases of addition fail, which are caught and shown as zeros. "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO () happy-2.1.7/examples/glr/expr-tree/0000755000000000000000000000000007346545000015350 5ustar0000000000000000happy-2.1.7/examples/glr/expr-tree/Expr.y0000644000000000000000000000131507346545000016460 0ustar0000000000000000{ -- only list imports here import Data.Char import Tree } %tokentype { Token } %lexer { lexer } { TokenEOF } %token '*' { Sym '*' } '+' { Sym '+' } '-' { Sym '-' } '(' { Sym '(' } ')' { Sym ')' } i { AnInt $$ } %% E :: {Tree ForestId Int} : E '+' E { Plus $1 $3 } | E '*' E { Times $1 $3 } | E '-' E { Minus $1 $3 } | '(' E ')' { Pars $2 } | i { Const $1 } { data Token = TokenEOF | Sym Char | AnInt {getInt :: Int} deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-2.1.7/examples/glr/expr-tree/Hugs.lhs0000644000000000000000000000107107346545000016765 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-2.1.7/examples/glr/expr-tree/Main.lhs0000644000000000000000000000151407346545000016745 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include 
"DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > (s:o) <- getArgs > let x = concat o > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ (if 'f' `elem` x then unlines (map show $ Map.toList f) else "") > if 'g' `elem` x then toDV (Map.toList f) else return () > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-2.1.7/examples/glr/expr-tree/Makefile0000644000000000000000000000101507346545000017005 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs OPT = -O2 expr : Expr.hs Main.lhs # might want to run happy with --ghc ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "1+2*4-3" runn : expr ./expr +RTS -s -RTS `perl -e 'print join ("+", (1 .. ${NUM}));'` | tee out-${NUM} cat expr.stat >> out-${NUM} eof : expr echo testing premature eof ./expr "1+2*" err : expr echo testing syntax error ./expr "1+2*2++3" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-2.1.7/examples/glr/expr-tree/README0000644000000000000000000000056307346545000016234 0ustar0000000000000000 Example of arithmetic expression parsing, but producing a labelled forest. Note use of polymorphic type in the labels. See the code for more discussion. "make run" to run the test case.
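As a toy illustration of decoding a labelled forest (hand-built, hypothetical values — not output of this example's generated parser), the sketch below packs the two readings of "1+2-3" under a single forest node and evaluates both:

```haskell
import qualified Data.Map as Map

data Tree a b = Plus a a | Minus a a | Const b deriving Show

type ForestId = Int
type Forest   = Map.Map ForestId [Tree ForestId Int]

-- A hand-built packed forest for "1+2-3"; node 3 holds both readings.
forest :: Forest
forest = Map.fromList
  [ (0, [Const 1]), (1, [Const 2]), (2, [Const 3])
  , (3, [Minus 4 2, Plus 0 5])   -- (1+2)-3 and 1+(2-3), packed together
  , (4, [Plus 0 1]), (5, [Minus 1 2]) ]

-- Evaluate every reading of a node.
decodeNode :: ForestId -> [Int]
decodeNode i = concatMap eval (forest Map.! i)
  where eval (Const n)   = [n]
        eval (Plus a b)  = (+) <$> decodeNode a <*> decodeNode b
        eval (Minus a b) = (-) <$> decodeNode a <*> decodeNode b

main :: IO ()
main = print (decodeNode 3)   -- [0,0]: both readings happen to give 0
```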
For Hugs, load up Hugs.lhs - it is a simplified version of Main, with entry point "test :: String -> IO ()" NOTE: you need the -98 flag on Hugs, owing to non-standard class use happy-2.1.7/examples/glr/expr-tree/Tree.lhs0000644000000000000000000000106407346545000016760 0ustar0000000000000000> module Tree where > data Tree a b > = Plus a a > | Times a a > | Minus a a > | Pars a > | Const b > deriving (Show) Note: + we need a construct for the location of parentheses + sometimes it is useful to keep this information anyway -- eg ghc's implementation of customisable prec & assoc. + I've left Trees polymorphic in the "branch" type - this supports labelling the forest with Int-based trees then switching to Tree-based trees later + But this might require some non-Haskell-98 flags for the related class instances. happy-2.1.7/examples/glr/hidden-leftrec/0000755000000000000000000000000007346545000016312 5ustar0000000000000000happy-2.1.7/examples/glr/hidden-leftrec/Expr.y0000644000000000000000000000101407346545000017416 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token '+' { Sym '+' } i { AnInt $$ } %% R : Q {} Q : B Q i {} | S {} S : A S i {} | '+' {} | Q i {} B : {} A : {} { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-2.1.7/examples/glr/hidden-leftrec/Hugs.lhs0000644000000000000000000000107107346545000017727 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> 
do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-2.1.7/examples/glr/hidden-leftrec/Main.lhs0000644000000000000000000000133207346545000017705 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-2.1.7/examples/glr/hidden-leftrec/Makefile0000644000000000000000000000053607346545000017756 0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs expr : Expr.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "+ 1 1 1 1 1 1 " eof : expr echo testing premature eof ./expr "" err : expr echo testing syntax error ./expr "+ 1 +" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-2.1.7/examples/glr/hidden-leftrec/README0000644000000000000000000000141607346545000017174 0ustar0000000000000000 Example of hidden left recursion The key point is that it has rules of form (X -> A X z), where A may match the empty string. The original GLR algorithm will loop on such productions, since the reduction (A -> empty) is always possible. The grammar is based on the one in Rekers[1], pointed out to me by Joost Visser.
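To see why such rules are dangerous, here is a toy sketch (hypothetical — nothing like the Happy-generated parser) of the simplified grammar Q -> A Q i | '+', A -> empty, in naive list-of-successes style. Because A consumes nothing, the recursion on Q makes no progress on its own; the artificial depth bound is what makes this version terminate.

```haskell
type P a = String -> [(a, String)]

pA :: P ()
pA s = [((), s)]          -- A -> empty: always succeeds, consumes nothing

-- Q -> A Q 'i' | '+'  with an artificial depth bound; without the bound
-- the first alternative recurses forever on the same input.
pQ :: Int -> P Int
pQ 0 _ = []
pQ d s = [ (n + 1, s2) | ((), s1)    <- pA s
                       , (n, 'i':s2) <- pQ (d - 1) s1 ]
      ++ [ (0, rest)   | '+':rest    <- [s] ]

main :: IO ()
main = print (pQ 10 "+ii")   -- [(2,""),(1,"i"),(0,"ii")]
```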
Q -> A Q i | + A -> I have made it a bit more complex, adding a second layer of hidden recursion and allowing jumps from the second layer to the first. --- "make run" to run the test case. For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO () Don't forget to look at the graphs! --- [1] J. Rekers, "Parser Generation for Interactive Environments", PhD thesis, University of Amsterdam 1992. happy-2.1.7/examples/glr/highly-ambiguous/0000755000000000000000000000000007346545000016712 5ustar0000000000000000happy-2.1.7/examples/glr/highly-ambiguous/Expr.y0000644000000000000000000000135107346545000020022 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token 'b' { Sym _ } %% -- grammar taken from -- "Generalised LR Parsing in Haskell" -- Joao Fernandes, Joao Saraiva, and Joost Visser -- Universidade do Minho, Braga, Portugal -- submitted to AFP'04 summer school -- (Original source of grammar not identified by them) S : T {} T : A 'b' {} | T T T {} A : T 'b' A A A {} | T T 'b' {} | {} { data Token = TokenEOF | Sym Char | AnInt Int deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no } happy-2.1.7/examples/glr/highly-ambiguous/Hugs.lhs0000644000000000000000000000112607346545000020330 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test (read s :: Int) > test n > = do > case doParse $ map (:[]) $ lexer $ replicate n '+' of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ 
"Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-2.1.7/examples/glr/highly-ambiguous/Main.lhs0000644000000000000000000000136107346545000020307 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer $ replicate (read s) '+' of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-2.1.7/examples/glr/highly-ambiguous/Makefile0000644000000000000000000000053107346545000020351
For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has easy entry point "test :: String -> IO () --- [1] "Generalised LR Parsing in Haskell" Joao Fernandes, Joao Saraiva, and Joost Visser Universidade do Minho, Braga, Portugal submitted to AFP'04 summer school happy-2.1.7/examples/glr/nlp/0000755000000000000000000000000007346545000014226 5ustar0000000000000000happy-2.1.7/examples/glr/nlp/English.y0000644000000000000000000000226707346545000016020 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token det { Det $$ } prep { Prep $$ } noun { Noun $$ } transvb { Verb Trans $$ } intransvb { Verb Intrans $$ } %% S : NP VP {} NP : det noun {} | NP PP {} PP : prep NP {} VP : transvb NP {} | intransvb {} | VP PP {} { data Token = TokenEOF | Noun String | Verb Arity String | Prep String | Det String deriving (Show,Eq,Ord) data Arity = Trans | Intrans deriving (Show,Eq,Ord) lexer :: String -> [[Token]] lexer = map lex_word . 
words -- simple lexicon -- (no claims to accuracy) lex_word w@"the" = [Det w] lex_word w@"a" = [Det w] lex_word w@"some" = [Det w] lex_word w@"in" = [Prep w] lex_word w@"with" = [Prep w] lex_word w@"park" = [Verb Trans w, Noun w] lex_word w@"man" = [Verb Trans w, Noun w] lex_word w@"saw" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w@"run" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w@"race" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w@"telescope" = [Verb Trans w, Verb Intrans w, Noun w] lex_word w = error $ "Not known: " ++ show w } happy-2.1.7/examples/glr/nlp/Hugs.lhs0000644000000000000000000000106007346545000015641 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import English > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-2.1.7/examples/glr/nlp/Main.lhs0000644000000000000000000000122407346545000015621 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import English #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f happy-2.1.7/examples/glr/nlp/Makefile0000644000000000000000000000067007346545000015671
0ustar0000000000000000TOP=.. include ${TOP}/Makefile.defs english : English.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o english --make Main.lhs run : english ./english "the man saw the race with a telescope" eof : english echo testing premature eof ./english "the man saw a" err : english echo testing syntax error ./english "the the man saw race" test : run eof err clean : rm -rf english English.info English.hs EnglishData.hs *.o *.hi out.daVinci happy-2.1.7/examples/glr/nlp/README0000644000000000000000000000105307346545000015105 0ustar0000000000000000 Obligatory NL ambiguity example. The grammar is small and simple, but exhibits prepositional phrase attachment ambiguity. Example: "the man saw the race with a telescope" Can be bracketed as the following (a) "the man saw (the race with a telescope)" (b) "(the man saw the race) with a telescope" Note: the "lexicon" contains some ambiguous words too - see if you can extend the grammar so this comes into play. "make run" to run the test case. For Hugs, load up Hugs.lhs - it is a simplified version, with entry point "test :: String -> IO ()" happy-2.1.7/examples/glr/packing/0000755000000000000000000000000007346545000015051 5ustar0000000000000000happy-2.1.7/examples/glr/packing/Expr.y0000644000000000000000000000057007346545000016163 0ustar0000000000000000{ -- only list imports here import Data.Char } %tokentype { Token } %lexer { lexer } { TokenEOF } %token i { Thing } %% S : A S {} | {} A : B {} B : C {} C : D {} | E {} D : i {} E : i F {} F : {} { data Token = TokenEOF | Thing deriving (Show,Eq, Ord) lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) = Thing : lexer cs } happy-2.1.7/examples/glr/packing/Hugs.lhs0000644000000000000000000000107107346545000016466 0ustar0000000000000000> module Main where > import System(getArgs) > import Data.Maybe(fromJust) > import FiniteMap(fmToList,lookupFM) > import Expr > main > = do > [s] <- getArgs > test s > test s > = do > case doParse $ map (:[]) $ 
lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ fmToList f) > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ fmToList f) > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > forest_lookup f i > = fromJust $ lookupFM f i happy-2.1.7/examples/glr/packing/Main.lhs0000644000000000000000000000133207346545000016444 0ustar0000000000000000> module Main where > import System.Environment(getArgs) > import Data.Maybe(fromJust) > import qualified Data.Map as Map > import Expr #include "DV_lhs" This requires CPP / preprocessing; use Hugs.lhs for tests with Hugs > main > = do > [s] <- getArgs > case doParse $ map (:[]) $ lexer s of > ParseOK r f -> do > putStrLn $ "Ok " ++ show r ++ "\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseEOF f -> do > putStrLn $ "Premature end of input:\n" > ++ unlines (map show $ Map.toList f) > toDV $ Map.toList f > ParseError ts f -> do > putStrLn $ "Error: " ++ show ts > toDV $ Map.toList f > forest_lookup f i > = fromJust $ Map.lookup i f happy-2.1.7/examples/glr/packing/Makefile0000644000000000000000000000064407346545000016515 0ustar0000000000000000TOP = .. include $(TOP)/Makefile.defs FILTER = --filter FILTER = .y.hs : ${HAPPY} -i -l $*.y ${FILTER} expr : Expr.hs Main.lhs ${GHC} -cpp -fglasgow-exts -o expr --make Main.lhs run : expr ./expr "+ 1 1 1 1 1 1 " eof : expr echo testing premature eof ./expr "" err : expr echo testing syntax error ./expr "+ 1 +" test : run eof err clean : rm -rf expr Expr.info Expr.hs ExprData.hs *.o *.hi out.daVinci happy-2.1.7/examples/glr/packing/README0000644000000000000000000000053407346545000015733 0ustar0000000000000000 Test case for packing Grammar allows different (asymmetric) routes for category C, which may get packed at different times --- "make run" to run the test case.
For Hugs, load up Hugs.lhs - it doesn't produce graphs, and has an easy entry point "test :: String -> IO ()". Correct behaviour is packing of ambiguity for all C nodes (for D and E). happy-2.1.7/examples/igloo/0000755000000000000000000000000007346545000013762 5ustar0000000000000000happy-2.1.7/examples/igloo/Foo.hs0000644000000000000000000000037407346545000015045 0ustar0000000000000000 module Main (main) where import Parser (parse) import System.IO (hPutStrLn, stderr) main :: IO () main = do x <- getContents case parse x of Left e -> hPutStrLn stderr $ "Failed with: " ++ e Right t -> print t happy-2.1.7/examples/igloo/Lexer.x0000644000000000000000000000173607346545000015241 0ustar0000000000000000 { module Lexer (lex_tok) where import Control.Monad.State (StateT, get) import ParserM (ParserM (..), mkT, Token(..), St, start_code, StartCode, Action, set_start_code, show_pos, position, input, AlexInput, alexGetByte, alexInputPrevChar) } words :- <0> $white+ ; <0> fork { mkT TFork } <0> leaf { mkT TLeaf } { get_tok :: AlexInput -> StateT St (Either String) (Token, AlexInput) get_tok = \i -> do st <- get case alexScan i (start_code st) of AlexEOF -> return (TEOF, i) AlexError _ -> fail $ "Lexical error at " ++ show_pos (position i) AlexSkip i' _ -> get_tok i' AlexToken i' l a -> a (i', take l (input i)) begin :: StartCode -> Action begin sc (i, _) = do set_start_code sc get_tok i lex_tok :: (Token -> ParserM a) -> ParserM a lex_tok cont = ParserM $ \i -> do (tok, iz) <- get_tok i case cont tok of ParserM x -> x iz } happy-2.1.7/examples/igloo/Makefile0000644000000000000000000000034407346545000015423 0ustar0000000000000000 all: alex Lexer.x happy Parser.y ghc --make Foo -o foo test: echo fork leaf leaf | ./foo -echo fork leaf leafqleaf | ./foo -echo leaf leaf leaf leaf leaf | ./foo @echo ok clean: rm -f *.o *.hi Parser.hs Lexer.hs foo happy-2.1.7/examples/igloo/Parser.y0000644000000000000000000000072407346545000015413 0ustar0000000000000000 { module Parser (parse) where import
Lexer (lex_tok) import ParserM (Token(..), Tree(..), ParserM, run_parser, get_pos, show_pos, happyError) } %name parsex tree %tokentype { Token } %monad { ParserM } %lexer { lex_tok } { TEOF } %token fork { TFork } leaf { TLeaf } %% tree :: { Tree } tree : leaf { Leaf } | fork tree tree { Fork $2 $3 } { parse :: String -> Either String Tree parse = run_parser parsex } happy-2.1.7/examples/igloo/ParserM.hs0000644000000000000000000000566407346545000015702 0ustar0000000000000000 module ParserM ( -- Parser Monad ParserM(..), AlexInput, run_parser, -- Parser state St, StartCode, start_code, set_start_code, -- Tokens Token(..), -- Tree Tree(..), -- Actions Action, andBegin, mkT, -- Positions get_pos, show_pos, -- Input alexGetByte, alexInputPrevChar, input, position, -- Other happyError ) where import Control.Applicative (Applicative(..)) import Control.Monad (ap, liftM) import Control.Monad.Except (throwError) import Control.Monad.State (StateT, evalStateT, get, put) import Control.Monad.Trans (lift) import Data.Char (ord) import Data.Word (Word8) -- Parser Monad newtype ParserM a = ParserM (AlexInput -> StateT St (Either String) (AlexInput, a)) instance Functor ParserM where fmap = liftM instance Applicative ParserM where pure a = ParserM $ \i -> return (i, a) (<*>) = ap instance Monad ParserM where return = pure ParserM m >>= k = ParserM $ \i -> do (i', x) <- m i case k x of ParserM y -> y i' fail err = ParserM $ \_ -> fail err run_parser :: ParserM a -> (String -> Either String a) run_parser (ParserM p) = \s -> case evalStateT (p (AlexInput init_pos s)) init_state of Left es -> throwError es Right (_, x) -> return x -- Parser state data St = St {start_code :: !StartCode} type StartCode = Int init_state :: St init_state = St 0 -- Tokens data Token = TEOF | TFork | TLeaf -- Tree data Tree = Leaf | Fork Tree Tree deriving Show -- Actions type Action = (AlexInput, String) -> StateT St (Either String) (Token, AlexInput) set_start_code :: StartCode -> StateT St (Either 
String) () set_start_code sc = do st <- get put $ st { start_code = sc } andBegin :: Action -> StartCode -> Action (act `andBegin` sc) x = do set_start_code sc act x mkT :: Token -> Action mkT t (p,_) = lift $ return (t, p) -- Positions data Pos = Pos !Int{- Line -} !Int{- Column -} get_pos :: ParserM Pos get_pos = ParserM $ \i@(AlexInput p _) -> return (i, p) alexMove :: Pos -> Char -> Pos alexMove (Pos l _) '\n' = Pos (l+1) 1 alexMove (Pos l c) '\t' = Pos l ((c+8) `div` 8 * 8) alexMove (Pos l c) _ = Pos l (c+1) init_pos :: Pos init_pos = Pos 1 1 show_pos :: Pos -> String show_pos (Pos l c) = "line " ++ show l ++ ", column " ++ show c -- Input data AlexInput = AlexInput {position :: !Pos, input :: String} alexGetByte :: AlexInput -> Maybe (Word8,AlexInput) alexGetByte (AlexInput p (x:xs)) = Just (fromIntegral (ord x), AlexInput (alexMove p x) xs) alexGetByte (AlexInput _ []) = Nothing alexInputPrevChar :: AlexInput -> Char alexInputPrevChar _ = error "Lexer doesn't implement alexInputPrevChar" happyError :: ParserM a happyError = do p <- get_pos fail $ "Parse error at " ++ show_pos p happy-2.1.7/examples/igloo/README0000644000000000000000000000125707346545000014647 0ustar0000000000000000From: Ian Lynagh [igloo@earth.li] Subject: happy and line numbers Date: Thu 12/02/2004 18:48 I think it would be nice to have an example of how to have a nice position tracking monadic parser calling a lexer per token in the examples directory. I've attached a cut-down parser of mine that does it well enough for me. The only slight niggle is that parse errors are reported at the end of the token rather than the start, but that hasn't bothered me enough to look into fixing it yet. The cut down parser doesn't use start codes, but I've left the machinery in to make it easier for people to see how to use them. Naturally any suggestions for improving it would be gladly received! 
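The igloo example's position arithmetic is worth a standalone illustration. Below is a minimal sketch, not part of the distribution, of the line/column rules that `ParserM.hs` implements in `alexMove`: a newline bumps the line and resets the column, a tab advances to the next multiple-of-8 column, and anything else moves one column right. The name `advance` is hypothetical; it mirrors the `alexMove` definition above.

```haskell
module Main where

-- Line and column, as in ParserM.Pos (1-based).
data Pos = Pos !Int !Int
  deriving (Eq, Show)

-- Same three equations as alexMove in ParserM.hs:
-- newline starts the next line at column 1, tab rounds the
-- column up to the next multiple of 8, others add one.
advance :: Pos -> Char -> Pos
advance (Pos l _) '\n' = Pos (l + 1) 1
advance (Pos l c) '\t' = Pos l ((c + 8) `div` 8 * 8)
advance (Pos l c) _    = Pos l (c + 1)

main :: IO ()
main =
  -- "ab\tc\nx": the tab at column 3 jumps to column 8,
  -- the newline starts line 2.
  print (foldl advance (Pos 1 1) "ab\tc\nx")
-- prints Pos 2 2
```

Note that with this formula tab stops land on columns 8, 16, 24, ..., which is slightly unusual for 1-based columns but matches the shipped code.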
happy-2.1.7/happy.cabal0000644000000000000000000001166007346545000013144 0ustar0000000000000000name: happy version: 2.1.7 license: BSD2 license-file: LICENSE copyright: (c) Andy Gill, Simon Marlow author: Andy Gill and Simon Marlow maintainer: https://github.com/haskell/happy bug-reports: https://github.com/haskell/happy/issues stability: stable homepage: https://www.haskell.org/happy/ synopsis: Happy is a parser generator for Haskell category: Development cabal-version: >= 1.10 build-type: Simple Description: Happy is a parser generator for Haskell. Given a grammar specification in BNF, Happy generates Haskell code to parse the grammar. Happy works in a similar way to the @yacc@ tool for C. tested-with: GHC == 9.12.2 GHC == 9.10.2 GHC == 9.8.4 GHC == 9.6.7 GHC == 9.4.8 GHC == 9.2.8 GHC == 9.0.2 GHC == 8.10.7 GHC == 8.8.4 GHC == 8.6.5 GHC == 8.4.4 GHC == 8.2.2 GHC == 8.0.2 extra-source-files: ChangeLog.md Makefile README.md examples/glr/nlp/Main.lhs examples/glr/nlp/Makefile examples/glr/nlp/README examples/glr/nlp/English.y examples/glr/nlp/Hugs.lhs examples/glr/Makefile examples/glr/Makefile.defs examples/glr/expr-eval/Main.lhs examples/glr/expr-eval/Makefile examples/glr/expr-eval/Expr.y examples/glr/expr-eval/README examples/glr/expr-eval/Hugs.lhs examples/glr/expr-tree/Main.lhs examples/glr/expr-tree/Makefile examples/glr/expr-tree/Expr.y examples/glr/expr-tree/README examples/glr/expr-tree/Tree.lhs examples/glr/expr-tree/Hugs.lhs examples/glr/highly-ambiguous/Main.lhs examples/glr/highly-ambiguous/Makefile examples/glr/highly-ambiguous/Expr.y examples/glr/highly-ambiguous/README examples/glr/highly-ambiguous/Hugs.lhs examples/glr/hidden-leftrec/Main.lhs examples/glr/hidden-leftrec/Makefile examples/glr/hidden-leftrec/Expr.y examples/glr/hidden-leftrec/README examples/glr/hidden-leftrec/Hugs.lhs examples/glr/expr-monad/Main.lhs examples/glr/expr-monad/Makefile examples/glr/expr-monad/Expr.y examples/glr/expr-monad/README examples/glr/expr-monad/Hugs.lhs 
examples/glr/bio-eg/Main.lhs examples/glr/bio-eg/Makefile examples/glr/bio-eg/Bio.y examples/glr/bio-eg/README examples/glr/bio-eg/1-1200.dna examples/glr/bio-eg/1-600.dna examples/glr/common/DV_lhs examples/glr/common/DaVinciTypes.hs examples/glr/packing/Main.lhs examples/glr/packing/Makefile examples/glr/packing/Expr.y examples/glr/packing/README examples/glr/packing/Hugs.lhs examples/PgnParser.ly examples/MonadTest.ly examples/igloo/ParserM.hs examples/igloo/Makefile examples/igloo/Parser.y examples/igloo/Foo.hs examples/igloo/README examples/igloo/Lexer.x examples/README examples/Calc.ly examples/DavesExample.ly examples/ErrorTest.ly examples/ErlParser.ly examples/SimonsExample.ly examples/LexerTest.ly tests/AttrGrammar001.y tests/AttrGrammar002.y tests/Makefile tests/Partial.ly tests/Test.ly tests/TestMulti.ly tests/TestPrecedence.ly tests/bogus-token.y tests/bug001.ly tests/bug002.y tests/error001.stderr tests/error001.stdout tests/error001.y tests/monad001.y tests/monad002.ly tests/monaderror.y tests/Pragma.y tests/precedence001.ly tests/precedence002.y tests/test_rules.y tests/issue91.y tests/issue93.y tests/issue94.y tests/issue95.y tests/monaderror-explist.y tests/typeclass_monad001.y tests/typeclass_monad002.ly tests/typeclass_monad_lexer.y tests/rank2.y tests/shift01.y source-repository head type: git location: https://github.com/haskell/happy.git executable happy hs-source-dirs: app main-is: Main.lhs build-depends: base >= 4.9 && < 5, array, containers >= 0.4.2, mtl >= 2.2.1, happy-lib == 2.1.7 default-language: Haskell98 default-extensions: CPP, MagicHash, FlexibleContexts, NamedFieldPuns ghc-options: -Wall -Wno-incomplete-uni-patterns other-modules: Paths_happy test-suite tests type: exitcode-stdio-1.0 main-is: test.hs -- This line is important as it ensures that the local `exe:happy` component declared above is built before the test-suite component is invoked, as well as making sure that `happy` is made available on $PATH and `$happy_datadir` is set 
accordingly before invoking `test.hs` build-tools: happy ghc-options: -threaded build-depends: base >= 4.9 && < 5, process < 1.7 default-language: Haskell98 happy-2.1.7/test.hs0000644000000000000000000000055007346545000012346 0ustar0000000000000000import Data.List (intercalate) import GHC.Conc (numCapabilities) import System.Process (system) import System.Exit (exitWith) main = do let jFlag = "-j" ++ show numCapabilities -- to run tests in parallel, run `cabal test --test-options="+RTS -N"` let cmd = ["make", jFlag, "-k", "-C", "tests", "clean", "all"] system (intercalate " " cmd) >>= exitWith happy-2.1.7/tests/0000755000000000000000000000000007346545000012175 5ustar0000000000000000happy-2.1.7/tests/AttrGrammar001.y0000644000000000000000000000242707346545000015036 0ustar0000000000000000{ import Control.Monad (unless) } %tokentype { Char } %token a { 'a' } %token b { 'b' } %token c { 'c' } %attributetype { Attrs a } %attribute value { a } %attribute len { Int } %name parse abcstring %monad { Maybe } %% abcstring : alist blist clist { $$ = $1 ++ $2 ++ $3 ; $2.len = $1.len ; $3.len = $1.len } alist : a alist { $$ = $1 : $> ; $$.len = $>.len + 1 } | { $$ = []; $$.len = 0 } blist : b blist { $$ = $1 : $> ; $>.len = $$.len - 1 } | { $$ = [] ; where failUnless ($$.len == 0) "blist wrong length" } clist : c clist { $$ = $1 : $> ; $>.len = $$.len - 1 } | { $$ = [] ; where failUnless ($$.len == 0) "clist wrong length" } { happyError = error "parse error" failUnless b msg = unless b (fail msg) main = case parse "" of { Just _ -> case parse "abc" of { Just _ -> case parse "aaaabbbbcccc" of { Just _ -> case parse "abbcc" of { Nothing -> case parse "aabcc" of { Nothing -> case parse "aabbc" of { Nothing -> putStrLn "Test works"; _ -> quit } ; _ -> quit }; _ -> quit }; _ -> quit } ; _ -> quit }; _ -> quit } quit = putStrLn "Test failed" } happy-2.1.7/tests/AttrGrammar002.y0000644000000000000000000000227107346545000015034 0ustar0000000000000000 %tokentype { Char } %token minus { 
'-' } %token plus { '+' } %token one { '1' } %token zero { '0' } %attributetype { Attrs } %attribute value { Integer } %attribute pos { Int } %name parse start %monad { Maybe } %% start : num { $$ = $1 } num : bits { $$ = $1 ; $1.pos = 0 } | plus bits { $$ = $2 ; $2.pos = 0 } | minus bits { $$ = negate $2; $2.pos = 0 } bits : bit { $$ = $1 ; $1.pos = $$.pos } | bits bit { $$ = $1 + $2 ; $1.pos = $$.pos + 1 ; $2.pos = $$.pos } bit : zero { $$ = 0 } | one { $$ = 2^($$.pos) } { happyError msg = fail $ "parse error: "++msg main = case parse "" of { Nothing -> case parse "abc" of { Nothing -> case parse "0" of { Just 0 -> case parse "1" of { Just 1 -> case parse "101" of { Just 5 -> case parse "111" of { Just 7 -> case parse "10001" of { Just 17 -> putStrLn "Test worked"; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit }; _ -> quit } quit = putStrLn "Test Failed" } happy-2.1.7/tests/Makefile0000644000000000000000000000655407346545000013643 0ustar0000000000000000# NOTE: This assumes that a working `ghc` is on $PATH; this may not # necessarily be the same GHC used by `cabal` for building `happy`. # # Again, if HC has been set in the environment (e.g. by the CI), we keep this setting. # [2021-07-14, PR #196](https://github.com/haskell/happy/pull/196) # HC ?= ghc HC_OPTS=-package array -Wall -Werror -XHaskell98 # NOTE: `cabal test` will take care to build the local `happy` # executable and place it into $PATH for us to pick up. # (This is ensured by setting build-tool-depends.) # # If it doesn't look like the happy binary in $PATH comes from the # build tree, then we'll fall back to pointing to # ../dist/build/happy/happy to support running tests via "runghc # Setup.hs test". # # If HAPPY has been set outside, e.g. in the environment, we trust this setting. # This way, we can pass in the correct Happy executable from a CI environment # without danger of it being "fixed" by the logic below.
# [2021-07-14, PR #196](https://github.com/haskell/happy/pull/196) # HAPPY ?= happy .PRECIOUS: %.n.hs %.c.hs %.o %.exe %.bin ifeq "$(TARGETPLATFORM)" "i386-unknown-mingw32" HS_PROG_EXT = .exe else HS_PROG_EXT = .bin endif TESTS = Test.ly TestMulti.ly TestPrecedence.ly bug001.ly \ monad001.y monad002.ly precedence001.ly precedence002.y \ bogus-token.y bug002.y Partial.ly \ issue91.y issue93.y issue94.y issue95.y \ test_rules.y monaderror.y monaderror-explist.y \ typeclass_monad001.y typeclass_monad002.ly typeclass_monad_lexer.y \ rank2.y shift01.y \ AttrGrammar001.y AttrGrammar002.y \ Pragma.y ERROR_TESTS = error001.y # NOTE: `cabal` will set the `happy_datadir` env-var accordingly before invoking the test-suite #TEST_HAPPY_OPTS = --strict --template=.. TEST_HAPPY_OPTS = --strict -g %.n.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) $< -o $@ %.n.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) $< -o $@ %.c.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) -c $< -o $@ %.c.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) -c $< -o $@ %.d.hs : %.y $(HAPPY) $(TEST_HAPPY_OPTS) -d $< -o $@ %.d.hs : %.ly $(HAPPY) $(TEST_HAPPY_OPTS) -d $< -o $@ CLEAN_FILES += *.n.hs *.c.hs *.info *.hi *.bin *.exe *.o *.run.stdout *.run.stderr ALL_TEST_HS = $(shell echo $(TESTS) | sed -e 's/\([^\. ]*\)\.\(l\)\{0,1\}y/\1.n.hs \1.c.hs/g') ALL_TESTS = $(patsubst %.hs, %.run, $(ALL_TEST_HS)) DEBUG_TESTS = Test.d$(HS_PROG_EXT) # Compile a single file with -d to ensure that it works CHECK_ERROR_TESTS = $(patsubst %, check.%, $(ERROR_TESTS)) HC_OPTS += -fforce-recomp .PRECIOUS: %.hs %.o %.bin %.$(HS_PROG_EXT) %.run : %$(HS_PROG_EXT) @echo "--> Checking $<..." ./$< path.run : # simply a test to output the path of the built happy executable, useful in CI @echo "--> Printing happy path..." which $(HAPPY) check.%.y : %.y @echo "--> Checking $<..." 
$(HAPPY) $(TEST_HAPPY_OPTS) -g $< 1>$*.run.stdout 2>$*.run.stderr || true sed -i '/^Up to date$$/d' $*.run.stdout $*.run.stderr @diff -u --ignore-all-space $*.stdout $*.run.stdout @diff -u --ignore-all-space $*.stderr $*.run.stderr %$(HS_PROG_EXT) : %.hs $(HC) $(HC_OPTS) $($*_LD_OPTS) $< -o $@ all :: path.run $(CHECK_ERROR_TESTS) $(DEBUG_TESTS) $(ALL_TESTS) check-todo:: $(HAPPY) $(TEST_HAPPY_OPTS) -d Test.ly $(HC) Test.hs -o happy_test ./happy_test -rm -f ./happy_test $(HAPPY) $(TEST_HAPPY_OPTS) -cd Test.ly $(HC) Test.hs -o happy_test ./happy_test -rm -f ./happy_test .PHONY: clean all check-todo path.run clean: $(RM) $(CLEAN_FILES) happy-2.1.7/tests/Partial.ly0000644000000000000000000001013007346545000014132 0ustar0000000000000000This is a simple test for happy. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calc Exp > %partial term Term > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The *new* system. %token let ( let ) in ( in ) int ( digit+ ) var ( {alpha}{alphanum}+ ) '=' ( = ) '+' ( + ) '-' ( - ) '*' ( * ) '/' ( / ) '(' ( \( ) ')' ( \) ) %whitespace ( {space}|{tab} ) %newline ( {newline} ) The left hand sides are the names of the terminals or tokens, and the right hand sides are how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules.
> Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: { Exp1 } > Exp1 : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: { Term } > Term : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: { Factor } > Factor : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } We are simply returning the parsed data structure! Now we need some extra code, to support this parser, and make it complete: > { All parsers must declare this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError tks = error "Parse error" Now we declare the datastructure that we are parsing. > data Exp = Let String Exp Exp | Exp1 Exp1 deriving Show > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term deriving Show > data Term = Times Term Factor | Div Term Factor | Factor Factor deriving Show > data Factor = Int Int | Var String | Brack Exp deriving Show The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB .. and a simple lexer that returns this datastructure.
> lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it. > runCalc :: String -> Exp > runCalc = calc . lexer > runTerm :: String -> Term > runTerm = term . lexer Here we test our parser. > main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> > case runTerm "1 + 2 * 3" of { > Factor (Int 1) -> > case runTerm "1*2+3" of { > Times (Factor (Int 1)) (Int 2) -> > case runTerm "1*2*3" of { > Times (Times (Factor (Int 1)) (Int 2)) (Int 3) -> > print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-2.1.7/tests/Pragma.y0000644000000000000000000000156407346545000013604 0ustar0000000000000000{-# OPTIONS_HAPPY --ghc --array --coerce --strict #-} { import Data.Char } %tokentype { Token } %token '*' { Sym '*' } '+' { Sym '+' } '-' { Sym 
'-' } '(' { Sym '(' } ')' { Sym ')' } i { AnInt $$ } %% E :: {Tree} : E '+' E { Plus $1 $3 } | E '*' E { Times $1 $3 } | E '-' E { Minus $1 $3 } | '(' E ')' { Pars $2 } | i { Const $1 } { data Token = TokenEOF | Sym Char | AnInt {getInt :: Int} deriving (Show,Eq, Ord) data Tree = Plus Tree Tree | Times Tree Tree | Minus Tree Tree | Pars Tree | Const Int deriving Show lexer :: String -> [Token] lexer [] = [] lexer (' ':cs) = lexer cs lexer (c:cs) | c `elem` "+*-()" = Sym c : lexer cs lexer (c:cs) | isDigit c = let (yes,no) = span isDigit cs in AnInt (read $ c:yes) : lexer no happyError _ = error "Parse error" main :: IO () main = print $ happyParse $ lexer "1+3" } happy-2.1.7/tests/Test.ly0000644000000000000000000000733607346545000013467 0ustar0000000000000000This is a simple test for happy. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calc > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The *new* system. %token let ( let ) in ( in ) int ( digit+ ) var ( {alpha}{alphanum}+ ) '=' ( = ) '+' ( + ) '-' ( - ) '*' ( * ) '/' ( / ) '(' ( \( ) ')' ( \) ) %whitespace ( {space}|{tab} ) %newline ( {newline} ) The left hand sides are the names of the terminals or tokens, and the right hand sides are how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules.
> Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: { Exp1 } > Exp1 : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: { Term } > Term : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: { Factor } > Factor : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } We are simply returning the parsed data structure! Now we need some extra code, to support this parser, and make it complete: > { All parsers must declare this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError tks = error "Parse error" Now we declare the datastructure that we are parsing. > data Exp = Let String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it.
> runCalc :: String -> Exp > runCalc = calc . lexer Here we test our parser. > main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-2.1.7/tests/TestMulti.ly0000644000000000000000000000775207346545000014502 0ustar0000000000000000This is a simple test for happy. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calcExp Exp > %name calcTerm Term > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. Now we declare all the possible tokens: > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } The *new* system. %token let ( let ) in ( in ) int ( digit+ ) var ( {alpha}{alphanum}+ ) '=' ( = ) '+' ( + ) '-' ( - ) '*' ( * ) '/' ( / ) '(' ( \( ) ')' ( \) ) %whitespace ( {space}|{tab} ) %newline ( {newline} ) The left hand sides are the names of the terminals or tokens, and the right hand sides are how to pattern match them. Like yacc, we include %% here, for no real reason. > %% Now we have the production rules.
> Exp :: { Exp } > Exp : let var '=' Exp in Exp { Let $2 $4 $6 } > | Exp1 { Exp1 $1 } > > Exp1 :: { Exp1 } > Exp1 : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: { Term } > Term : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: { Factor } > Factor : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } We are simply returning the parsed data structure! Now we need some extra code, to support this parser, and make it complete: > { All parsers must declare this function, which is called when an error is detected. Note that currently we do no error recovery. > happyError tks = error "Parse error" Now we declare the datastructure that we are parsing. > data Exp = Let String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB .. and a simple lexer that returns this datastructure. > lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it.
> runCalcExp :: String -> Exp > runCalcExp = calcExp . lexer > runCalcTerm :: String -> Term > runCalcTerm = calcTerm . lexer Here we test our parser. > main = case runCalcExp "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalcExp "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalcExp "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalcExp "let x = 2 in x * (x - 2)" of { > (Let "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> > > case runCalcTerm "2 * (3 + 1)" of { > (Times (Factor (Int 2)) (Brack (Exp1 (Plus (Term (Factor (Int 3))) (Factor (Int 1)))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-2.1.7/tests/TestPrecedence.ly0000644000000000000000000000663007346545000015445 0ustar0000000000000000This is a simple test for happy using operator precedence. First thing to declare is the name of your parser, and the type of the tokens the parser reads. > { > import Data.Char > } > %name calc > %tokentype { Token } The parser will be of type [Token] -> ?, where ? is determined by the production rules. 
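Concretely, for this grammar the `?` is filled in by the `Exp :: { Exp }`
signature on the start rule, so (as a comment-only sketch, not compiled
with the test):

```haskell
--   calc :: [Token] -> Exp
--
-- which is exactly how the driver at the bottom of this file uses it:
--   runCalc :: String -> Exp
--   runCalc = calc . lexer
```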
Now we declare all the possible tokens:

> %token
>       let     { TokenLet }
>       in      { TokenIn }
>       int     { TokenInt $$ }
>       var     { TokenVar $$ }
>       '='     { TokenEq }
>       '>'     { TokenGreater }
>       '<'     { TokenLess }
>       '+'     { TokenPlus }
>       '-'     { TokenMinus }
>       '*'     { TokenTimes }
>       '/'     { TokenDiv }
>       '('     { TokenOB }
>       ')'     { TokenCB }
>       UMINUS  { TokenFoo }

> %nonassoc '>' '<'
> %left '+' '-'
> %left '*' '/'
> %left UMINUS

> %%

> Exp :: { Exp }
> Exp : let var '=' Exp in Exp  { Let $2 $4 $6 }
>     | Exp '>' Exp             { Greater $1 $3 }
>     | Exp '<' Exp             { Less $1 $3 }
>     | Exp '+' Exp             { Plus $1 $3 }
>     | Exp '-' Exp             { Minus $1 $3 }
>     | Exp '*' Exp             { Times $1 $3 }
>     | Exp '/' Exp             { Div $1 $3 }
>     | '-' Exp %prec UMINUS    { Uminus $2 }
>     | '(' Exp ')'             { Brack $2 }
>     | int                     { Int $1 }
>     | var                     { Var $1 }

We are simply returning the parsed data structure!  Now we need some
extra code to support this parser and make it complete:

> {

All parsers must declare this function, which is called when an error
is detected.  Note that currently we do no error recovery.

> happyError tks = error "Parse error"

Now we declare the datastructure that we are parsing.

> data Exp
>       = Let String Exp Exp
>       | Greater Exp Exp
>       | Less Exp Exp
>       | Plus Exp Exp
>       | Minus Exp Exp
>       | Times Exp Exp
>       | Div Exp Exp
>       | Uminus Exp
>       | Brack Exp
>       | Int Int
>       | Var String
>       deriving Show

The datastructure for the tokens...

> data Token
>       = TokenLet
>       | TokenIn
>       | TokenInt Int
>       | TokenVar String
>       | TokenEq
>       | TokenGreater
>       | TokenLess
>       | TokenPlus
>       | TokenMinus
>       | TokenTimes
>       | TokenDiv
>       | TokenOB
>       | TokenCB
>       | TokenFoo

.. and a simple lexer that returns this datastructure.
> lexer :: String -> [Token] > lexer [] = [] > lexer (c:cs) > | isSpace c = lexer cs > | isAlpha c = lexVar (c:cs) > | isDigit c = lexNum (c:cs) > lexer ('=':cs) = TokenEq : lexer cs > lexer ('>':cs) = TokenGreater : lexer cs > lexer ('<':cs) = TokenLess : lexer cs > lexer ('+':cs) = TokenPlus : lexer cs > lexer ('-':cs) = TokenMinus : lexer cs > lexer ('*':cs) = TokenTimes : lexer cs > lexer ('/':cs) = TokenDiv : lexer cs > lexer ('(':cs) = TokenOB : lexer cs > lexer (')':cs) = TokenCB : lexer cs > lexNum cs = TokenInt (read num) : lexer rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> TokenLet : lexer rest > ("in",rest) -> TokenIn : lexer rest > (var,rest) -> TokenVar var : lexer rest To run the program, call this in gofer, or use some code to print it. > runCalc :: String -> Exp > runCalc = calc . lexer Here we test our parser. > main = case runCalc "let x = 1 in let y = 2 in x * y + x / y" of { > (Let "x" (Int 1) (Let "y" (Int 2) (Plus (Times (Var "x") (Var "y")) (Div (Var "x") (Var "y"))))) -> > case runCalc "- 1 * - 2 + 3" of { > (Plus (Times (Uminus (Int 1)) (Uminus (Int 2))) (Int 3)) -> > case runCalc "- - - 1 + 2 * 3 - 4" of { > (Minus (Plus (Uminus (Uminus (Uminus (Int 1)))) (Times (Int 2) (Int 3))) (Int 4)) -> > print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } > > quit = print "Test failed\n"; > > } happy-2.1.7/tests/bogus-token.y0000644000000000000000000000051507346545000014625 0ustar0000000000000000{ module Main where import Control.Exception as Exception } %tokentype { Token } %token A { A } %name parse %% parse : A { () } { data Token = A | B test1 = parse [B] main = do Exception.try (print test1 >> fail "Test failed.") :: IO (Either ErrorCall ()) putStrLn "Test worked" happyError = error "parse error" } happy-2.1.7/tests/bug001.ly0000644000000000000000000000062207346545000013541 0ustar0000000000000000> %name parse > %tokentype { Token } > %token Int { TokenInt } > %% > Expr :: { Int } 
> Expr : Term { $1 } The constant in the next rule would be defaulted to Integer, but it is forced to Int by the type signature of Expr above. This test exposed a bug in the unsafeCoerce method. > Term : Int { 42 } > { > main = print (parse [TokenInt]) > > data Token = TokenInt > > happyError = error "" > } happy-2.1.7/tests/bug002.y0000644000000000000000000000034707346545000013372 0ustar0000000000000000{ module Main where } %name parser %token foo { 1 } %tokentype { Int } %% -- two productions for the same non-terminal should work Foo : {- empty -} { () } Foo : Foo foo { () } { main = return () happyError = undefined } happy-2.1.7/tests/error001.stderr0000644000000000000000000000024007346545000014770 0ustar0000000000000000error001.y: Multiple rules for 'foo' error001.y: 8: unknown identifier ''a'' error001.y: 10: unknown identifier ''a'' error001.y: 11: unknown identifier ''b'' happy-2.1.7/tests/error001.stdout0000644000000000000000000000000007346545000015001 0ustar0000000000000000happy-2.1.7/tests/error001.y0000644000000000000000000000012007346545000013732 0ustar0000000000000000%name foo %tokentype { Token } %% foo : 'a' { } bar : 'a' { } foo : 'b' { } happy-2.1.7/tests/issue91.y0000644000000000000000000000107707346545000013676 0ustar0000000000000000-- See for more information %name parse prod %tokentype { Tok } %monad { P } { bindP } { returnP } %error { error "parse error" } %lexer { lexer } { EOF } %token IDENT { Identifier $$ } %% prod :: { () } : IDENT { () } { data Tok = EOF | Identifier String type P a = String -> (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = let (x,s') = p s in f x s' returnP :: a -> P a returnP = (,) lexer :: (Tok -> P a) -> P a lexer cont s = cont (case s of { "" -> EOF; _ -> Identifier s }) "" main = pure () } happy-2.1.7/tests/issue93.y0000644000000000000000000020731607346545000013704 0ustar0000000000000000-- See for more information -- This is an example of a grammar that has more than 2^15 entries in `happyTable` 
(39817). { import System.Exit import Data.Char } %name parseLit lit %name parseAttr export_attribute %name parseTy export_ty %name parsePat pat %name parseStmt stmt %name parseExpr expr %name parseItem mod_item %name parseSourceFileContents source_file %name parseBlock export_block %name parseImplItem impl_item %name parseTraitItem trait_item %name parseTt token_tree %name parseTokenStream token_stream %name parseTyParam ty_param %name parseLifetimeDef lifetime_def %name parseWhereClause where_clause %name parseGenerics generics %tokentype { Token } %lexer { lexNonSpace `bindP` } { Eof } %monad { P } { bindP } { returnP } %error { parseError } %expect 0 %token '=' { Equal } '<' { Less } '>' { Greater } '!' { Exclamation } '~' { Tilde } '+' { Plus } '-' { Minus } '*' { Star } '/' { Slash } '%' { Percent } '^' { Caret } '&' { Ampersand } '|' { Pipe } '@' { At } '...' { DotDotDot } '..' { DotDot } '.' { Dot } ',' { Comma } ';' { Semicolon } '::' { ModSep } ':' { Colon } '->' { RArrow } '<-' { LArrow } '=>' { FatArrow } '#' { Pound } '$' { Dollar } '?' { Question } '#!' 
{ Shebang } '||' { PipePipe } '&&' { AmpersandAmpersand } '>=' { GreaterEqual } '>>=' { GreaterGreaterEqual } '<<' { LessLess } '>>' { GreaterGreater } '==' { EqualEqual } '!=' { NotEqual } '<=' { LessEqual } '<<=' { LessLessEqual } '-=' { MinusEqual } '&=' { AmpersandEqual } '|=' { PipeEqual } '+=' { PlusEqual } '*=' { StarEqual } '/=' { SlashEqual } '^=' { CaretEqual } '%=' { PercentEqual } '(' { OpenParen } '[' { OpenBracket } '{' { OpenBrace } ')' { CloseParen } ']' { CloseBracket } '}' { CloseBrace } byte { ByteTok{} } char { CharTok{} } int { IntegerTok{} } float { FloatTok{} } str { StrTok{} } byteStr { ByteStrTok{} } rawStr { StrRawTok{} } rawByteStr { ByteStrRawTok{} } as { IdentTok "as" } box { IdentTok "box" } break { IdentTok "break" } const { IdentTok "const" } continue { IdentTok "continue" } crate { IdentTok "crate" } else { IdentTok "else" } enum { IdentTok "enum" } extern { IdentTok "extern" } false { IdentTok "false" } fn { IdentTok "fn" } for { IdentTok "for" } if { IdentTok "if" } impl { IdentTok "impl" } in { IdentTok "in" } let { IdentTok "let" } loop { IdentTok "loop" } match { IdentTok "match" } mod { IdentTok "mod" } move { IdentTok "move" } mut { IdentTok "mut" } pub { IdentTok "pub" } ref { IdentTok "ref" } return { IdentTok "return" } Self { IdentTok "Self" } self { IdentTok "self" } static { IdentTok "static" } struct { IdentTok "struct" } super { IdentTok "super" } trait { IdentTok "trait" } true { IdentTok "true" } type { IdentTok "type" } unsafe { IdentTok "unsafe" } use { IdentTok "use" } where { IdentTok "where" } while { IdentTok "while" } do { IdentTok "do" } abstract { IdentTok "abstract" } alignof { IdentTok "alignof" } become { IdentTok "become" } final { IdentTok "final" } macro { IdentTok "macro" } offsetof { IdentTok "offsetof" } override { IdentTok "override" } priv { IdentTok "priv" } proc { IdentTok "proc" } pure { IdentTok "pure" } sizeof { IdentTok "sizeof" } typeof { IdentTok "typeof" } unsized { IdentTok "unsized" } 
virtual { IdentTok "virtual" } yield { IdentTok "yield" } default { IdentTok "default" } union { IdentTok "union" } catch { IdentTok "catch" } outerDoc { OuterDoc } innerDoc { InnerDoc } IDENT { IdentTok{} } '_' { Underscore } LIFETIME { LifetimeTok _ } ntItem { Interpolated 0 } ntBlock { Interpolated 1 } ntStmt { Interpolated 2 } ntPat { Interpolated 3 } ntExpr { Interpolated 4 } ntTy { Interpolated 5 } ntIdent { Interpolated 6 } ntPath { Interpolated 7 } ntTT { Interpolated 8 } ntArm { Interpolated 9 } ntImplItem { Interpolated 10 } ntTraitItem { Interpolated 11 } ntGenerics { Interpolated 12 } ntWhereClause { Interpolated 13 } ntArg { Interpolated 14 } ntLit { Interpolated 15 } %nonassoc SEG %nonassoc mut DEF EQ '::' %nonassoc IDENT ntIdent default union catch self %nonassoc box return break continue IMPLTRAIT LAMBDA %right '=' '>>=' '<<=' '-=' '+=' '*=' '/=' '^=' '|=' '&=' '%=' %right '<-' %nonassoc SINGLERNG %nonassoc INFIXRNG %nonassoc POSTFIXRNG %nonassoc PREFIXRNG %nonassoc '..' '...' %left '||' %left '&&' %left '==' '!=' '<' '>' '<=' '>=' %left '|' %left '^' %left '&' %left '<<' '>>' %left '+' '-' %left '*' '/' '%' %nonassoc ':' as %nonassoc UNARY %nonassoc FIELD VIS PATH WHERE NOSEMI %nonassoc '?' '.' %nonassoc '{' ntBlock '[' '(' '!' ';' %% ident :: { Int } : ntIdent { 0 } | union { 1 } | default { 2 } | catch { 3 } | IDENT { 4 } gt :: { Int } : {- empty -} { 5 } some(p) :: { Int } : some(p) p { 6 } | p { 7 } many(p) :: { Int } : some(p) { 8 } | {- empty -} { 9 } sep_by1(p,sep) :: { Int } : sep_by1(p,sep) sep p { 10 } | p { 11 } sep_by(p,sep) :: { Int } : sep_by1(p,sep) { 12 } | {- empty -} { 13 } sep_by1T(p,sep) :: { Int } : sep_by1(p,sep) sep { 14 } | sep_by1(p,sep) { 15 } sep_byT(p,sep) :: { Int } : sep_by1T(p,sep) { 16 } | {- empty -} { 17 } source_file :: { Int } : inner_attrs many(mod_item) { 18 } | many(mod_item) { 19 } outer_attribute :: { Int } : '#' '[' mod_path token_stream ']' { 20 } | outerDoc { 21 } inner_attribute :: { Int } : '#' '!' 
'[' mod_path token_stream ']' { 22 } | '#!' '[' mod_path token_stream ']' { 23 } | innerDoc { 24 } inner_attrs :: { Int } : inner_attrs inner_attribute { 25 } | inner_attribute { 26 } lit :: { Int } : ntLit { 27 } | byte { 28 } | char { 29 } | int { 30 } | float { 31 } | true { 32 } | false { 33 } | string { 34 } string :: { Int } : str { 35 } | rawStr { 36 } | byteStr { 37 } | rawByteStr { 38 } qual_path(segs) :: { Int } : '<' qual_path_suf(segs) { 39 } | lt_ty_qual_path as ty_path '>' '::' segs { 40 } qual_path_suf(segs) :: { Int } : ty '>' '::' segs { 41 } | ty as ty_path '>' '::' segs { 42 } lt_ty_qual_path :: { Int } : '<<' qual_path_suf(path_segments_without_colons) { 43 } generic_values :: { Int } : '<' sep_by1(lifetime,',') ',' sep_by1T(ty,',') gt '>' { 45 } | '<' sep_by1(lifetime,',') ',' sep_by1T(binding,',') gt '>' { 46 } | '<' sep_by1T(lifetime,',') gt '>' { 47 } | '<' sep_by1(ty,',') ',' sep_by1T(binding,',') gt '>' { 48 } | '<' sep_by1T(ty,',') gt '>' { 49 } | '<' sep_by1T(binding,',') gt '>' { 50 } | '<' gt '>' { 51 } | lt_ty_qual_path ',' sep_by1T(ty,',') gt '>' { 53 } | lt_ty_qual_path ',' sep_by1T(binding,',') gt '>' { 54 } | lt_ty_qual_path gt '>' { 55 } binding :: { Int } : ident '=' ty { 56 } ty_path :: { Int } : ntPath { 57 } | path_segments_without_colons { 58 } | '::' path_segments_without_colons { 59 } ty_qual_path :: { Int } : qual_path(path_segments_without_colons) { 60 } path_segments_without_colons :: { Int } : sep_by1(path_segment_without_colons, '::') %prec SEG { 61 } path_segment_without_colons :: { Int } : self_or_ident path_parameter1 { 62 } path_parameter1 :: { Int } : generic_values { 63 } | '(' sep_byT(ty,',') ')' { 64 } | '(' sep_byT(ty,',') ')' '->' ty_no_plus { 65 } | {- empty -} %prec IDENT { 66 } expr_path :: { Int } : ntPath { 67 } | path_segments_with_colons { 68 } | '::' path_segments_with_colons { 69 } expr_qual_path :: { Int } : qual_path(path_segments_with_colons) { 70 } path_segments_with_colons :: { Int } : 
self_or_ident { 71 } | path_segments_with_colons '::' self_or_ident { 72 } | path_segments_with_colons '::' generic_values { 73 } mod_path :: { Int } : ntPath { 74 } | self_or_ident { 75 } | '::' self_or_ident { 76 } | mod_path '::' ident { 77 } lifetime :: { Int } : LIFETIME { 78 } trait_ref :: { Int } : ty_path { 79 } ty :: { Int } : ty_no_plus { 80 } | poly_trait_ref_mod_bound '+' sep_by1T(ty_param_bound_mod,'+') { 81 } ty_no_plus :: { Int } : ntTy { 82 } | no_for_ty { 83 } | for_ty_no_plus { 84 } ty_prim :: { Int } : no_for_ty_prim { 85 } | for_ty_no_plus { 86 } | poly_trait_ref_mod_bound '+' sep_by1T(ty_param_bound_mod,'+') { 87 } no_for_ty :: { Int } : no_for_ty_prim { 88 } | '(' ')' { 89 } | '(' ty ')' { 90 } | '(' ty ',' ')' { 91 } | '(' ty ',' sep_by1T(ty,',') ')' { 92 } | ty_qual_path { 93 } no_for_ty_prim :: { Int } : '_' { 94 } | '!' { 95 } | '[' ty ']' { 96 } | '*' ty_no_plus { 97 } | '*' const ty_no_plus { 98 } | '*' mut ty_no_plus { 99 } | '&' ty_no_plus { 100 } | '&' lifetime ty_no_plus { 101 } | '&' mut ty_no_plus { 102 } | '&' lifetime mut ty_no_plus { 103 } | '&&' ty_no_plus { 104 } | '&&' lifetime ty_no_plus { 105 } | '&&' mut ty_no_plus { 106 } | '&&' lifetime mut ty_no_plus { 107 } | ty_path %prec PATH { 108 } | ty_mac { 109 } | unsafe extern abi fn fn_decl(arg_general) { 110 } | unsafe fn fn_decl(arg_general) { 111 } | extern abi fn fn_decl(arg_general) { 112 } | fn fn_decl(arg_general) { 113 } | typeof '(' expr ')' { 114 } | '[' ty ';' expr ']' { 115 } | '?' trait_ref { 116 } | '?' 
for_lts trait_ref { 117 } for_ty_no_plus :: { Int } : for_lts unsafe extern abi fn fn_decl(arg_general) { 118 } | for_lts unsafe fn fn_decl(arg_general) { 119 } | for_lts extern abi fn fn_decl(arg_general) { 120 } | for_lts fn fn_decl(arg_general) { 121 } | for_lts trait_ref { 122 } impl_ty :: { Int } : impl sep_by1(ty_param_bound_mod,'+') %prec IMPLTRAIT { 123 } lifetime_mut :: { Int } : lifetime mut { 124 } | lifetime { 125 } | mut { 126 } | {- empty -} { 127 } fn_decl(arg) :: { Int } : '(' sep_by1(arg,',') ',' '...' ')' ret_ty { 128 } | '(' sep_byT(arg,',') ')' ret_ty { 129 } fn_decl_with_self_general :: { Int } : '(' arg_self_general ',' sep_byT(arg_general,',') ')' ret_ty { 130 } | '(' arg_self_general ')' ret_ty { 131 } | '(' ')' ret_ty { 132 } fn_decl_with_self_named :: { Int } : '(' arg_self_named ',' sep_by1(arg_named,',') ',' ')' ret_ty { 133 } | '(' arg_self_named ',' sep_by1(arg_named,',') ')' ret_ty { 134 } | '(' arg_self_named ',' ')' ret_ty { 135 } | '(' arg_self_named ')' ret_ty { 136 } | fn_decl(arg_named) { 137 } ty_param_bound :: { Int } : lifetime { 138 } | poly_trait_ref { 139 } poly_trait_ref_mod_bound :: { Int } : poly_trait_ref { 140 } | '?' poly_trait_ref { 141 } ty_param_bound_mod :: { Int } : ty_param_bound { 142 } | '?' 
poly_trait_ref { 143 } abi :: { Int } : str { 144 } | {- empty -} { 145 } ret_ty :: { Int } : '->' ty_no_plus { 146 } | '->' impl_ty { 147 } | {- empty -} { 148 } poly_trait_ref :: { Int } : trait_ref { 149 } | for_lts trait_ref { 150 } for_lts :: { Int } : for '<' sep_byT(lifetime_def,',') '>' { 151 } lifetime_def :: { Int } : many(outer_attribute) lifetime ':' sep_by1T(lifetime,'+') { 152 } | many(outer_attribute) lifetime { 153 } arg_named :: { Int } : ntArg { 154 } | pat ':' ty { 155 } arg_general :: { Int } : ntArg { 156 } | ty { 157 } | '_' ':' ty { 158 } | ident ':' ty { 159 } | mut ident ':' ty { 160 } | '&' '_' ':' ty { 161 } | '&' ident ':' ty { 162 } | '&&' '_' ':' ty { 163 } | '&&' ident ':' ty { 164 } arg_self_general :: { Int } : mut self { 165 } | self ':' ty { 166 } | mut self ':' ty { 167 } | arg_general { 168 } arg_self_named :: { Int } : self { 169 } | mut self { 170 } | '&' self { 171 } | '&' lifetime self { 172 } | '&' mut self { 173 } | '&' lifetime mut self { 174 } | self ':' ty { 175 } | mut self ':' ty { 176 } lambda_arg :: { Int } : ntArg { 177 } | pat ':' ty { 178 } | pat { 179 } pat :: { Int } : ntPat { 180 } | '_' { 181 } | '&' mut pat { 182 } | '&' pat { 183 } | '&&' mut pat { 184 } | '&&' pat { 185 } | lit_expr { 186 } | '-' lit_expr { 187 } | box pat { 188 } | binding_mode1 ident '@' pat { 189 } | binding_mode1 ident { 190 } | ident '@' pat { 191 } | expr_path { 192 } | expr_qual_path { 193 } | lit_or_path '...' lit_or_path { 194 } | expr_path '{' '..' '}' { 195 } | expr_path '{' pat_fields '}' { 196 } | expr_path '(' pat_tup ')' { 197 } | expr_mac { 198 } | '[' pat_slice ']' { 199 } | '(' pat_tup ')' { 200 } pat_tup :: { Int } : sep_by1(pat,',') ',' '..' ',' sep_by1(pat,',') { 201 } | sep_by1(pat,',') ',' '..' ',' sep_by1(pat,',') ',' { 202 } | sep_by1(pat,',') ',' '..' { 203 } | sep_by1(pat,',') { 204 } | sep_by1(pat,',') ',' { 205 } | '..' ',' sep_by1(pat,',') { 206 } | '..' ',' sep_by1(pat,',') ',' { 207 } | '..' 
{ 208 } | {- empty -} { 209 } pat_slice :: { Int } : sep_by1(pat,',') ',' '..' ',' sep_by1T(pat,',') { 210 } | sep_by1(pat,',') ',' '..' { 211 } | sep_by1(pat,',') '..' ',' sep_by1T(pat,',') { 212 } | sep_by1(pat,',') '..' { 213 } | sep_by1T(pat,',') { 214 } | '..' ',' sep_by1T(pat,',') { 215 } | '..' { 216 } | {- empty -} { 217 } lit_or_path :: { Int } : expr_path { 218 } | expr_qual_path { 219 } | '-' lit_expr { 220 } | lit_expr { 221 } pat_fields :: { Int } : sep_byT(pat_field,',') { 222 } | sep_by1(pat_field,',') ',' '..' { 223 } pat_field :: { Int } : binding_mode ident { 224 } | box binding_mode ident { 225 } | binding_mode ident ':' pat { 226 } binding_mode1 :: { Int } : ref mut { 227 } | ref { 228 } | mut { 229 } binding_mode :: { Int } : binding_mode1 { 230 } | {- empty -} { 231 } gen_expression(lhs,rhs,rhs2) :: { Int } : ntExpr { 232 } | lit_expr { 233 } | '[' sep_byT(expr,',') ']' { 234 } | '[' inner_attrs sep_byT(expr,',') ']' { 235 } | '[' expr ';' expr ']' { 236 } | expr_mac { 237 } | expr_path %prec PATH { 238 } | expr_qual_path { 239 } | '*' rhs %prec UNARY { 240 } | '!' rhs %prec UNARY { 241 } | '-' rhs %prec UNARY { 242 } | '&' rhs %prec UNARY { 243 } | '&' mut rhs %prec UNARY { 244 } | '&&' rhs %prec UNARY { 245 } | '&&' mut rhs %prec UNARY { 246 } | box rhs %prec UNARY { 247 } | left_gen_expression(lhs,rhs,rhs2) { 248 } | '..' rhs2 %prec PREFIXRNG { 249 } | '...' rhs2 %prec PREFIXRNG { 250 } | '..' %prec SINGLERNG { 251 } | '...' 
%prec SINGLERNG { 252 } | return { 253 } | return rhs { 254 } | continue { 255 } | continue lifetime { 256 } | break { 257 } | break rhs { 258 } | break lifetime { 259 } | break lifetime rhs %prec break { 260 } | move lambda_args rhs %prec LAMBDA { 261 } | lambda_args rhs %prec LAMBDA { 262 } left_gen_expression(lhs,rhs,rhs2) :: { Int } : postfix_blockexpr(lhs) { 263 } | lhs '[' expr ']' { 264 } | lhs '(' sep_byT(expr,',') ')' { 265 } | lhs ':' ty_no_plus { 266 } | lhs as ty_no_plus { 267 } | lhs '*' rhs { 268 } | lhs '/' rhs { 269 } | lhs '%' rhs { 270 } | lhs '+' rhs { 271 } | lhs '-' rhs { 272 } | lhs '<<' rhs { 273 } | lhs '>>' rhs { 274 } | lhs '&' rhs { 275 } | lhs '^' rhs { 276 } | lhs '|' rhs { 277 } | lhs '==' rhs { 278 } | lhs '!=' rhs { 279 } | lhs '<' rhs { 280 } | lhs '>' rhs { 281 } | lhs '<=' rhs { 282 } | lhs '>=' rhs { 283 } | lhs '&&' rhs { 284 } | lhs '||' rhs { 285 } | lhs '..' %prec POSTFIXRNG { 286 } | lhs '...' %prec POSTFIXRNG { 287 } | lhs '..' rhs2 %prec INFIXRNG { 288 } | lhs '...' rhs2 %prec INFIXRNG { 289 } | lhs '<-' rhs { 290 } | lhs '=' rhs { 291 } | lhs '>>=' rhs { 292 } | lhs '<<=' rhs { 293 } | lhs '-=' rhs { 294 } | lhs '+=' rhs { 295 } | lhs '*=' rhs { 296 } | lhs '/=' rhs { 297 } | lhs '^=' rhs { 298 } | lhs '|=' rhs { 299 } | lhs '&=' rhs { 300 } | lhs '%=' rhs { 301 } postfix_blockexpr(lhs) :: { Int } : lhs '?' { 302 } | lhs '.' ident %prec FIELD { 303 } | lhs '.' ident '(' sep_byT(expr,',') ')' { 304 } | lhs '.' ident '::' '<' sep_byT(ty,',') '>' '(' sep_byT(expr,',') ')' { 305 } | lhs '.' 
int { 306 } expr :: { Int } : gen_expression(expr,expr,expr) { 307 } | paren_expr { 308 } | struct_expr { 309 } | block_expr { 310 } | lambda_expr_block { 311 } nostruct_expr :: { Int } : gen_expression(nostruct_expr,nostruct_expr,nonstructblock_expr) { 312 } | paren_expr { 313 } | block_expr { 314 } nonstructblock_expr :: { Int } : gen_expression(nonstructblock_expr,nostruct_expr,nonstructblock_expr) { 315 } | paren_expr { 316 } | block_like_expr { 317 } | unsafe inner_attrs_block { 318 } nonblock_expr :: { Int } : gen_expression(nonblock_expr,expr,expr) { 319 } | paren_expr { 320 } | struct_expr { 321 } | lambda_expr_block { 322 } blockpostfix_expr :: { Int } : postfix_blockexpr(block_like_expr) { 323 } | postfix_blockexpr(vis_safety_block) { 324 } | left_gen_expression(blockpostfix_expr,expr,expr) { 325 } lit_expr :: { Int } : lit { 326 } block_expr :: { Int } : block_like_expr { 327 } | inner_attrs_block { 328 } | unsafe inner_attrs_block { 329 } block_like_expr :: { Int } : if_expr { 330 } | loop inner_attrs_block { 331 } | lifetime ':' loop inner_attrs_block { 332 } | for pat in nostruct_expr inner_attrs_block { 333 } | lifetime ':' for pat in nostruct_expr inner_attrs_block { 334 } | while nostruct_expr inner_attrs_block { 335 } | lifetime ':' while nostruct_expr inner_attrs_block { 336 } | while let pat '=' nostruct_expr inner_attrs_block { 337 } | lifetime ':' while let pat '=' nostruct_expr inner_attrs_block { 338 } | match nostruct_expr '{' '}' { 339 } | match nostruct_expr '{' inner_attrs '}' { 340 } | match nostruct_expr '{' arms '}' { 341 } | match nostruct_expr '{' inner_attrs arms '}' { 342 } | expr_path '!' 
'{' token_stream '}' { 343 } | do catch inner_attrs_block { 344 } if_expr :: { Int } : if nostruct_expr block else_expr { 345 } | if let pat '=' nostruct_expr block else_expr { 346 } else_expr :: { Int } : else block { 347 } | else if_expr { 348 } | {- empty -} { 349 } arms :: { Int } : ntArm { 350 } | ntArm arms { 351 } | many(outer_attribute) sep_by1(pat,'|') arm_guard '=>' expr_arms { 352 } arm_guard :: { Int } : {- empty -} { 353 } | if expr { 354 } comma_arms :: { Int } : {- empty -} { 355 } | ',' { 356 } | ',' arms { 357 } expr_arms :: { Int } : nonblock_expr comma_arms { 358 } | blockpostfix_expr comma_arms { 359 } | vis_safety_block comma_arms { 360 } | vis_safety_block arms { 361 } | block_like_expr comma_arms { 362 } | block_like_expr arms { 363 } paren_expr :: { Int } : '(' ')' { 364 } | '(' inner_attrs ')' { 365 } | '(' expr ')' { 366 } | '(' inner_attrs expr ')' { 367 } | '(' expr ',' ')' { 368 } | '(' inner_attrs expr ',' ')' { 369 } | '(' expr ',' sep_by1T(expr,',') ')' { 370 } | '(' inner_attrs expr ',' sep_by1T(expr,',') ')' { 371 } lambda_expr_block :: { Int } : move lambda_args '->' ty_no_plus block { 372 } | lambda_args '->' ty_no_plus block { 373 } lambda_args :: { Int } : '||' { 374 } | '|' sep_byT(lambda_arg,',') '|' { 375 } struct_expr :: { Int } : expr_path '{' '..' expr '}' { 376 } | expr_path '{' inner_attrs '..' expr '}' { 377 } | expr_path '{' sep_by1(field,',') ',' '..' expr '}' { 378 } | expr_path '{' inner_attrs sep_by1(field,',') ',' '..' 
expr '}' { 379 } | expr_path '{' sep_byT(field,',') '}' { 380 } | expr_path '{' inner_attrs sep_byT(field,',') '}' { 381 } field :: { Int } : ident ':' expr { 382 } | ident { 383 } vis_safety_block :: { Int } : pub_or_inherited safety inner_attrs_block { 384 } vis_union_nonblock_expr :: { Int } : union_expr { 385 } | left_gen_expression(vis_union_nonblock_expr, expr, expr) { 386 } union_expr :: { Int } : pub_or_inherited union { 387 } stmt :: { Int } : ntStmt { 388 } | many(outer_attribute) let pat ':' ty initializer ';' { 389 } | many(outer_attribute) let pat initializer ';' { 390 } | many(outer_attribute) nonblock_expr ';' { 391 } | many(outer_attribute) block_like_expr ';' { 392 } | many(outer_attribute) blockpostfix_expr ';' { 393 } | many(outer_attribute) vis_union_nonblock_expr ';' { 394 } | many(outer_attribute) block_like_expr %prec NOSEMI { 395 } | many(outer_attribute) vis_safety_block ';' { 396 } | many(outer_attribute) vis_safety_block %prec NOSEMI { 397 } | gen_item(pub_or_inherited) { 398 } | many(outer_attribute) expr_path '!' ident '[' token_stream ']' ';' { 399 } | many(outer_attribute) expr_path '!' ident '(' token_stream ')' ';' { 400 } | many(outer_attribute) expr_path '!' 
ident '{' token_stream '}' { 401 } pub_or_inherited :: { Int } : pub %prec VIS { 402 } | {- empty -} %prec VIS { 403 } stmtOrSemi :: { Int } : ';' { 404 } | stmt { 405 } stmts_possibly_no_semi :: { Int } : stmtOrSemi stmts_possibly_no_semi { 406 } | stmtOrSemi { 407 } | many(outer_attribute) nonblock_expr { 408 } | many(outer_attribute) blockpostfix_expr { 409 } initializer :: { Int } : '=' expr { 410 } | {- empty -} { 411 } block :: { Int } : ntBlock { 412 } | '{' '}' { 413 } | '{' stmts_possibly_no_semi '}' { 414 } inner_attrs_block :: { Int } : block { 415 } | '{' inner_attrs '}' { 416 } | '{' inner_attrs stmts_possibly_no_semi '}' { 417 } gen_item(vis) :: { Int } : many(outer_attribute) vis static ident ':' ty '=' expr ';' { 418 } | many(outer_attribute) vis static mut ident ':' ty '=' expr ';' { 419 } | many(outer_attribute) vis const ident ':' ty '=' expr ';' { 420 } | many(outer_attribute) vis type ident generics where_clause '=' ty ';' { 421 } | many(outer_attribute) vis use view_path ';' { 422 } | many(outer_attribute) vis safety extern crate ident ';' { 423 } | many(outer_attribute) vis safety extern crate ident as ident ';' { 424 } | many(outer_attribute) vis const safety fn ident generics fn_decl(arg_named) where_clause inner_attrs_block { 425 } | many(outer_attribute) vis safety extern abi fn ident generics fn_decl(arg_named) where_clause inner_attrs_block { 426 } | many(outer_attribute) vis safety fn ident generics fn_decl(arg_named) where_clause inner_attrs_block { 427 } | many(outer_attribute) vis mod ident ';' { 428 } | many(outer_attribute) vis mod ident '{' many(mod_item) '}' { 429 } | many(outer_attribute) vis mod ident '{' inner_attrs many(mod_item) '}' { 430 } | many(outer_attribute) vis safety extern abi '{' many(foreign_item) '}' { 431 } | many(outer_attribute) vis safety extern abi '{' inner_attrs many(foreign_item) '}' { 432 } | many(outer_attribute) vis struct ident generics struct_decl_args { 433 } | many(outer_attribute) vis union ident 
generics struct_decl_args { 434 } | many(outer_attribute) vis enum ident generics where_clause '{' sep_byT(enum_def,',') '}' { 435 } | many(outer_attribute) vis safety trait ident generics where_clause '{' many(trait_item) '}' { 437 } | many(outer_attribute) vis safety impl generics ty_prim where_clause '{' impl_items '}' { 438 } | many(outer_attribute) vis default safety impl generics ty_prim where_clause '{' impl_items '}' { 439 } | many(outer_attribute) vis safety impl generics '(' ty_no_plus ')' where_clause '{' impl_items '}' { 440 } | many(outer_attribute) vis default safety impl generics '(' ty_no_plus ')' where_clause '{' impl_items '}' { 441 } | many(outer_attribute) vis safety impl generics '!' trait_ref for ty where_clause '{' impl_items '}' { 442 } | many(outer_attribute) vis default safety impl generics '!' trait_ref for ty where_clause '{' impl_items '}' { 443 } | many(outer_attribute) vis safety impl generics trait_ref for ty where_clause '{' impl_items '}' { 444 } | many(outer_attribute) vis default safety impl generics trait_ref for ty where_clause '{' impl_items '}' { 445 } | many(outer_attribute) vis safety impl generics trait_ref for '..' '{' '}' { 446 } mod_item :: { Int } : ntItem { 447 } | gen_item(vis) { 448 } | many(outer_attribute) expr_path '!' ident '[' token_stream ']' ';' { 449 } | many(outer_attribute) expr_path '!' '[' token_stream ']' ';' { 450 } | many(outer_attribute) expr_path '!' ident '(' token_stream ')' ';' { 451 } | many(outer_attribute) expr_path '!' '(' token_stream ')' ';' { 452 } | many(outer_attribute) expr_path '!' ident '{' token_stream '}' { 453 } | many(outer_attribute) expr_path '!' 
'{' token_stream '}' { 454 } foreign_item :: { Int } : many(outer_attribute) vis static ident ':' ty ';' { 455 } | many(outer_attribute) vis static mut ident ':' ty ';' { 456 } | many(outer_attribute) vis fn ident generics fn_decl(arg_named) where_clause ';' { 457 } generics :: { Int } : ntGenerics { 458 } | '<' sep_by1(lifetime_def,',') ',' sep_by1T(ty_param,',') gt '>' { 459 } | '<' sep_by1T(lifetime_def,',') gt '>' { 460 } | '<' sep_by1T(ty_param,',') gt '>' { 461 } | '<' gt '>' { 462 } | {- empty -} { 463 } ty_param :: { Int } : many(outer_attribute) ident { 464 } | many(outer_attribute) ident ':' sep_by1T(ty_param_bound_mod,'+') { 465 } | many(outer_attribute) ident '=' ty { 466 } | many(outer_attribute) ident ':' sep_by1T(ty_param_bound_mod,'+') '=' ty { 467 } struct_decl_args :: { Int } : where_clause ';' { 468 } | where_clause '{' sep_byT(struct_decl_field,',') '}' { 469 } | '(' sep_byT(tuple_decl_field,',') ')' where_clause ';' { 470 } struct_decl_field :: { Int } : many(outer_attribute) vis ident ':' ty { 471 } tuple_decl_field :: { Int } : many(outer_attribute) vis ty { 472 } enum_def :: { Int } : many(outer_attribute) ident '{' sep_byT(struct_decl_field,',') '}' { 473 } | many(outer_attribute) ident '(' sep_byT(tuple_decl_field,',') ')' { 474 } | many(outer_attribute) ident initializer { 475 } where_clause :: { Int } : {- empty -} { 476 } | ntWhereClause { 477 } | where sep_by(where_predicate,',') %prec WHERE { 478 } | where sep_by1(where_predicate,',') ',' %prec WHERE { 479 } where_predicate :: { Int } : lifetime { 480 } | lifetime ':' sep_by1T(lifetime,'+') { 481 } | no_for_ty %prec EQ { 482 } | no_for_ty '=' ty { 483 } | no_for_ty ':' sep_by1T(ty_param_bound_mod,'+') { 484 } | for_lts no_for_ty { 485 } | for_lts no_for_ty ':' sep_by1T(ty_param_bound_mod,'+') { 486 } impl_items :: { Int } : many(impl_item) { 487 } | inner_attrs many(impl_item) { 488 } impl_item :: { Int } : many(outer_attribute) vis def type ident '=' ty ';' { 489 } | 
many(outer_attribute) vis def const ident ':' ty '=' expr ';' { 490 } | many(outer_attribute) def mod_mac { 491 } trait_item :: { Int } : ntTraitItem { 494 } | many(outer_attribute) const ident ':' ty initializer ';' { 495 } | many(outer_attribute) mod_mac { 496 } | many(outer_attribute) type ident ';' { 497 } | many(outer_attribute) type ident '=' ty ';' { 498 } | many(outer_attribute) type ident ':' sep_by1T(ty_param_bound_mod,'+') ';' { 499 } safety :: { Int } : {- empty -} { 503 } | unsafe { 504 } ext_abi :: { Int } : {- empty -} { 505 } | extern abi { 506 } vis :: { Int } : {- empty -} %prec VIS { 507 } | pub %prec VIS { 508 } | pub '(' crate ')' { 509 } | pub '(' in mod_path ')' { 510 } | pub '(' super ')' { 511 } | pub '(' self ')' { 512 } def :: { Int } : {- empty -} %prec DEF { 513 } | default { 514 } view_path :: { Int } : '::' sep_by1(self_or_ident,'::') { 515 } | '::' sep_by1(self_or_ident,'::') as ident { 516 } | '::' '*' { 517 } | '::' sep_by1(self_or_ident,'::') '::' '*' { 518 } | '::' sep_by1(self_or_ident,'::') '::' '{' sep_byT(plist,',') '}' { 519 } | '::' '{' sep_byT(plist,',') '}' { 520 } | sep_by1(self_or_ident,'::') { 521 } | sep_by1(self_or_ident,'::') as ident { 522 } | '*' { 523 } | sep_by1(self_or_ident,'::') '::' '*' { 524 } | sep_by1(self_or_ident,'::') '::' '{' sep_byT(plist,',') '}' { 525 } | '{' sep_byT(plist,',') '}' { 526 } self_or_ident :: { Int } : ident { 527 } | self { 528 } | Self { 529 } | super { 530 } plist :: { Int } : self_or_ident { 531 } | self_or_ident as ident { 532 } expr_mac :: { Int } : expr_path '!' '[' token_stream ']' { 533 } | expr_path '!' '(' token_stream ')' { 534 } ty_mac :: { Int } : ty_path '!' '[' token_stream ']' { 535 } | ty_path '!' '{' token_stream '}' { 536 } | ty_path '!' '(' token_stream ')' { 537 } mod_mac :: { Int } : mod_path '!' '[' token_stream ']' ';' { 538 } | mod_path '!' '{' token_stream '}' { 539 } | mod_path '!' 
'(' token_stream ')' ';' { 540 } token_stream :: { Int } : {- empty -} { 541 } | some(token_tree) { 542 } token_tree :: { Int } : ntTT { 543 } | '(' token_stream ')' { 544 } | '{' token_stream '}' { 545 } | '[' token_stream ']' { 546 } | token { 547 } token :: { Int } : '=' { 548 } | '<' { 549 } | '>' { 550 } | '!' { 551 } | '~' { 552 } | '-' { 553 } | '/' { 554 } | '+' { 555 } | '*' { 556 } | '%' { 557 } | '^' { 558 } | '&' { 559 } | '|' { 560 } | '<<=' { 561 } | '>>=' { 562 } | '-=' { 563 } | '&=' { 564 } | '|=' { 565 } | '+=' { 566 } | '*=' { 567 } | '/=' { 568 } | '^=' { 569 } | '%=' { 571 } | '||' { 572 } | '&&' { 573 } | '==' { 574 } | '!=' { 575 } | '<=' { 576 } | '>=' { 577 } | '<<' { 578 } | '>>' { 579 } | '@' { 580 } | '...' { 581 } | '..' { 582 } | '.' { 583 } | ',' { 584 } | ';' { 585 } | '::' { 586 } | ':' { 587 } | '->' { 588 } | '<-' { 589 } | '=>' { 590 } | '#' { 591 } | '$' { 592 } | '?' { 593 } | '#!' { 594 } | byte { 595 } | char { 596 } | int { 597 } | float { 598 } | str { 599 } | byteStr { 600 } | rawStr { 601 } | rawByteStr { 602 } | as { 603 } | box { 604 } | break { 605 } | const { 606 } | continue { 607 } | crate { 608 } | else { 609 } | enum { 610 } | extern { 611 } | false { 612 } | fn { 613 } | for { 614 } | if { 615 } | impl { 616 } | in { 617 } | let { 618 } | loop { 619 } | match { 620 } | mod { 621 } | move { 622 } | mut { 623 } | pub { 624 } | ref { 625 } | return { 626 } | Self { 627 } | self { 628 } | static { 629 } | struct { 630 } | super { 631 } | trait { 632 } | true { 633 } | type { 634 } | unsafe { 635 } | use { 636 } | where { 637 } | while { 638 } | abstract { 639 } | alignof { 640 } | become { 641 } | do { 642 } | final { 643 } | macro { 644 } | offsetof { 645 } | override { 646 } | priv { 647 } | proc { 648 } | pure { 649 } | sizeof { 650 } | typeof { 651 } | unsized { 652 } | virtual { 653 } | yield { 654 } | default { 655 } | union { 656 } | catch { 657 } | outerDoc { 658 } | innerDoc { 659 } | IDENT { 660 } | '_' { 
661 } | LIFETIME { 662 } export_attribute :: { Int } : inner_attribute { 663 } | outer_attribute { 664 } export_block :: { Int } : ntBlock { 665 } | safety '{' '}' { 666 } | safety '{' stmts_possibly_no_semi '}' { 667 } export_ty :: { Int } : ty { 668 } | impl_ty { 669 } { type P a = String -> Either String (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = case p s of Left m -> Left m Right (x,s') -> f x s' returnP :: a -> P a returnP x s = Right (x,s) parseError :: Show b => b -> P a parseError b _ = Left ("Syntax error: the symbol `" ++ show b ++ "' does not fit here") data Token = Equal | Less | Greater | Ampersand | Pipe | Exclamation | Tilde | Plus | Minus | Star | Slash | Percent | Caret | GreaterEqual | GreaterGreaterEqual | AmpersandAmpersand | PipePipe | LessLess | GreaterGreater | EqualEqual | NotEqual | LessEqual | LessLessEqual | MinusEqual | AmpersandEqual | PipeEqual | PlusEqual | StarEqual | SlashEqual | CaretEqual | PercentEqual | At | Dot | DotDot | DotDotDot | Comma | Semicolon | Colon | ModSep | RArrow | LArrow | FatArrow | Pound | Dollar | Question | OpenParen | OpenBracket | OpenBrace | CloseParen | CloseBracket | CloseBrace | IdentTok String | Underscore | LifetimeTok String | Space | InnerDoc | OuterDoc | Shebang | Eof | ByteTok String | CharTok String | IntegerTok String | FloatTok String | StrTok String | StrRawTok String | ByteStrTok String | ByteStrRawTok String | Interpolated Int deriving Show -- This is an intentionally simplified tokenizer lexNonSpace :: P Token lexNonSpace "" = Right (Eof, "") lexNonSpace ('.':cs) = Right (Dot, cs) lexNonSpace ('+':cs) = Right (Plus, cs) lexNonSpace (';':cs) = Right (Semicolon, cs) lexNonSpace (',':cs) = Right (Comma, cs) lexNonSpace ('=':cs) = Right (Equal, cs) lexNonSpace ('{':cs) = Right (OpenBrace, cs) lexNonSpace ('}':cs) = Right (CloseBrace, cs) lexNonSpace ('(':cs) = Right (OpenParen, cs) lexNonSpace (')':cs) = Right (CloseParen, cs) lexNonSpace (c:cs) | isSpace c = lexNonSpace cs | 
isNumber c = let (tok,cs') = span isNumber (c:cs) in Right (IntegerTok tok, cs') | isAlpha c = let (tok,cs') = span isAlphaNum (c:cs) in Right (IdentTok tok, cs') | otherwise = Left ("Unexpected character: `" ++ [c] ++ "'") main = case parseStmt "union.1 + 2;" of Right (394, "") -> pure () _ -> exitWith (ExitFailure 1) } happy-2.1.7/tests/issue94.y0000644000000000000000000000107307346545000013675 0ustar0000000000000000-- See for more information %name parse prod %tokentype { Token } %monad { P } { bindP } { returnP } %error { error "parse error" } %lexer { lexer } { EOF } %token IDENT { Identifier $$ } %% prod : IDENT { () } { data Token = EOF | Identifier String type P a = String -> (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = let (x,s') = p s in f x s' returnP :: a -> P a returnP = (,) lexer :: (Token -> P a) -> P a lexer cont s = cont (case s of { "" -> EOF; _ -> Identifier s }) "" main = return () } happy-2.1.7/tests/issue95.y0000644000000000000000000000112507346545000013674 0ustar0000000000000000-- See for more information %name parse prod %tokentype { Token } %monad { P } { bindP } { returnP } %error { error "parse error" } %lexer { lexer } { EOF } %token IDENT { Identifier $$ } %% prod :: { () } : IDENT {%% \_ -> returnP () } { data Token = EOF | Identifier String type P a = String -> (a, String) bindP :: P a -> (a -> P b) -> P b bindP p f s = let (x,s') = p s in f x s' returnP :: a -> P a returnP = (,) lexer :: (Token -> P a) -> P a lexer cont s = cont (case s of { "" -> EOF; _ -> Identifier s }) "" main = pure () } happy-2.1.7/tests/monad001.y0000644000000000000000000000372607346545000013716 0ustar0000000000000000-- Testing %monad without %lexer, using the IO monad. 
{ module Main where import System.IO import Data.Char } %name calc %tokentype { Token } %token num { TokenNum $$ } '+' { TokenPlus } '-' { TokenMinus } '*' { TokenTimes } '/' { TokenDiv } '^' { TokenExp } '\n' { TokenEOL } '(' { TokenOB } ')' { TokenCB } %left '-' '+' %left '*' %nonassoc '/' %left NEG -- negation--unary minus %right '^' -- exponentiation %monad { IO } { (>>=) } { return } %% input : {- empty string -} { () } | input line { $1 } line : '\n' { () } | exp '\n' {% hPutStr stdout (show $1) } exp : num { $1 } | exp '+' exp { $1 + $3 } | exp '-' exp { $1 - $3 } | exp '*' exp { $1 * $3 } | exp '/' exp { $1 / $3 } | '-' exp %prec NEG { -$2 } -- | exp '^' exp { $1 ^ $3 } | '(' exp ')' { $2 } { main = do calc (lexer "1 + 2 * 3 / 4\n") {- -- check that non-associative operators can't be used together r <- try (calc (lexer "1 / 2 / 3")) case r of Left e -> return () Right _ -> ioError (userError "fail!") -} data Token = TokenExp | TokenEOL | TokenNum Double | TokenPlus | TokenMinus | TokenTimes | TokenDiv | TokenOB | TokenCB -- and a simple lexer that returns this datastructure. lexer :: String -> [Token] lexer [] = [] lexer ('\n':cs) = TokenEOL : lexer cs lexer (c:cs) | isSpace c = lexer cs | isDigit c = lexNum (c:cs) lexer ('+':cs) = TokenPlus : lexer cs lexer ('-':cs) = TokenMinus : lexer cs lexer ('*':cs) = TokenTimes : lexer cs lexer ('/':cs) = TokenDiv : lexer cs lexer ('^':cs) = TokenExp : lexer cs lexer ('(':cs) = TokenOB : lexer cs lexer (')':cs) = TokenCB : lexer cs lexNum cs = TokenNum (read num) : lexer rest where (num,rest) = span isNum cs isNum c = isDigit c || c == '.' happyError tokens = ioError (userError "parse error") } happy-2.1.7/tests/monad002.ly0000644000000000000000000001041507346545000014064 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. 
> { > {-# OPTIONS_GHC -fglasgow-exts #-} > -- -fglasgow-exts required because P is a type synonym, and Happy uses it > -- unsaturated. > import Data.Char > } > %name calc > %tokentype { Token } > %monad { P } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> ParseOk (Let l $2 $4 $6) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. > data ParseResult a > = ParseOk a > | ParseFail String > type P a = String -> Int -> ParseResult a > thenP :: P a -> (a -> P b) -> P b > m `thenP` k = \s l -> > case m s l of > ParseFail s -> ParseFail s > ParseOk a -> k a s l > returnP :: a -> P a > returnP a = \s l -> ParseOk a ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. 
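As a standalone illustration (not part of the test file above), the three-purpose parse monad described here — threading the input string, threading the line number, and handling success/failure — can be sketched in isolation; `getLineNo` below is a hypothetical helper added only to show the state threading:

```haskell
-- Standalone sketch of the pattern the literate file defines next:
-- a computation takes the remaining input and a line number, and
-- either fails with a message or succeeds with a value.
data ParseResult a = ParseOk a | ParseFail String deriving Show

type P a = String -> Int -> ParseResult a

thenP :: P a -> (a -> P b) -> P b
m `thenP` k = \s l -> case m s l of
  ParseFail e -> ParseFail e     -- failure short-circuits the chain
  ParseOk a   -> k a s l         -- success passes the state on unchanged

returnP :: a -> P a
returnP a = \_ _ -> ParseOk a

-- Hypothetical action reporting the current line, to show the threading.
getLineNo :: P Int
getLineNo = \_ l -> ParseOk l

main :: IO ()
main = case (getLineNo `thenP` \l -> returnP (l + 1)) "input" 41 of
  ParseOk n   -> print n         -- prints 42
  ParseFail e -> putStrLn e
```

The key design point, reused by the file below, is that `thenP` re-supplies the same `s` and `l` to the continuation, so every action in the chain sees the threaded state.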
and a simple lexer that returns this datastructure. > -- lexer :: (Token -> Parse) -> Parse > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: String -> Exp > runCalc s = case calc s 1 of > ParseOk e -> e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. This is the overall type of the parser. > type Parse = P Exp > calc :: Parse The next function is called when a parse error is detected. It has the same type as the top-level parse function. > -- happyError :: Parse > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. 
> main = case runCalc "1 + 2 + 3" of { > (Exp1 (Plus (Plus (Term (Factor (Int 1))) (Factor (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 * 2 + 3" of { > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) (Factor (Int 3)))) -> > case runCalc "1 + 2 * 3" of { > (Exp1 (Plus (Term (Factor (Int 1))) (Times (Factor (Int 2)) (Int 3)))) -> > case runCalc "let x = 2 in x * (x - 2)" of { > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) (Exp1 (Term (Times (Factor (Var "x")) (Brack (Exp1 (Minus (Term (Factor (Var "x"))) (Factor (Int 2))))))))) -> print "Test works\n"; > _ -> quit } ; _ -> quit } ; _ -> quit } ; _ -> quit } > quit = print "Test failed\n" > } happy-2.1.7/tests/monaderror-explist.y0000644000000000000000000000371007346545000016226 0ustar0000000000000000{ {-# LANGUAGE FunctionalDependencies #-} {-# LANGUAGE FlexibleInstances #-} -- For ancient GHC 7.0.4 {-# LANGUAGE MultiParamTypeClasses #-} module Main where import Data.Char import Control.Monad (when) import System.Exit import System.Environment (getProgName) import Data.List (isPrefixOf) } %name parseFoo %tokentype { Token } %errorhandlertype explist %error { handleErrorExpList } %monad { ParseM } { (>>=) } { return } %token 'S' { TokenSucc } 'Z' { TokenZero } 'T' { TokenTest } %% Exp : 'Z' { 0 } | 'T' 'Z' Exp { $3 + 1 } | 'S' Exp { $2 + 1 } { type ParseM a = Either ParseError a data ParseError = ParseError (Maybe (Token, [String])) | StringError String deriving (Eq,Show) instance Error ParseError where strMsg = StringError data Token = TokenSucc | TokenZero | TokenTest deriving (Eq,Show) handleErrorExpList :: ([Token], [String]) -> ParseM a handleErrorExpList ([], _) = throwError $ ParseError Nothing handleErrorExpList (ts, explist) = throwError $ ParseError $ Just $ (head ts, explist) lexer :: String -> [Token] lexer [] = [] lexer (c:cs) | isSpace c = lexer cs | c == 'S' = TokenSucc:(lexer cs) | c == 'Z' = TokenZero:(lexer cs) | c == 'T' = TokenTest:(lexer cs) | otherwise = error "lexer error" main :: IO () 
main = do test "Z Z" $ Left (ParseError (Just (TokenZero,[]))) test "T S" $ Left (ParseError (Just (TokenSucc,["'Z'"]))) where test inp exp = do putStrLn $ "testing " ++ inp let tokens = lexer inp when (parseFoo tokens /= exp) $ do print (parseFoo tokens) exitWith (ExitFailure 1) --- class Error a where noMsg :: a noMsg = strMsg "" strMsg :: String -> a class Monad m => MonadError e m | m -> e where throwError :: e -> m a instance MonadError e (Either e) where throwError = Left } happy-2.1.7/tests/monaderror.y0000644000000000000000000000275507346545000014550 0ustar0000000000000000{ {-# LANGUAGE FunctionalDependencies #-} {-# LANGUAGE FlexibleInstances #-} -- For ancient GHC 7.0.4 {-# LANGUAGE MultiParamTypeClasses #-} module Main where import Control.Monad (when) import Data.Char import System.Exit } %name parseFoo %tokentype { Token } %error { handleError } %monad { ParseM } { (>>=) } { return } %token 'S' { TokenSucc } 'Z' { TokenZero } %% Exp : 'Z' { 0 } | 'S' Exp { $2 + 1 } { type ParseM a = Either ParseError a data ParseError = ParseError (Maybe Token) | StringError String deriving (Eq,Show) instance Error ParseError where strMsg = StringError data Token = TokenSucc | TokenZero deriving (Eq,Show) handleError :: [Token] -> ParseM a handleError [] = throwError $ ParseError Nothing handleError ts = throwError $ ParseError $ Just $ head ts lexer :: String -> [Token] lexer [] = [] lexer (c:cs) | isSpace c = lexer cs | c == 'S' = TokenSucc:(lexer cs) | c == 'Z' = TokenZero:(lexer cs) | otherwise = error "lexer error" main :: IO () main = do let tokens = lexer "S S" when (parseFoo tokens /= Left (ParseError Nothing)) $ do print (parseFoo tokens) exitWith (ExitFailure 1) --- class Error a where noMsg :: a noMsg = strMsg "" strMsg :: String -> a class Monad m => MonadError e m | m -> e where throwError :: e -> m a instance MonadError e (Either e) where throwError = Left } happy-2.1.7/tests/precedence001.ly0000644000000000000000000000310007346545000015053 
0ustar0000000000000000This module demonstrates a Happy bug (in version <= 1.10). > { > module Main where > import System.IO > import Control.Exception as Exception > } > > %name parse > > %tokentype { Tok } > %token > '+' { Plus } > '-' { Minus } > int { Num $$ } > > %nonassoc '+' '-' > > %% Ambiguous grammar. > E : E '+' E { Plus' $1 $3 } > | E '-' E { Minus' $1 $3 } > | int { Num' $1 } > { > happyError :: [Tok] -> a > happyError s = error (concatMap show s) > > data Tok = Plus | Minus | Num Int deriving Show > > data Syn = Plus' Syn Syn | Minus' Syn Syn | Num' Int deriving Show All the examples below should fail. None of them does so under Happy v1.8, and only the first one under Happy v1.9 and v1.10. > test1 = parse tokens1 > test2 = parse tokens2 > test3 = parse tokens3 > > tokens1 = [Num 6, Plus, Num 7, Plus, Num 8] > tokens2 = [Num 6, Plus, Num 7, Minus, Num 8] > tokens3 = [Num 6, Minus, Num 7, Minus, Num 8] The generated info files seem correct, so there is probably something wrong with the table generation. These errors only show up when one uses Happy with the -a flag (and only that flag). I know that it's no point in using just that flag, but I happened to be doing so while trying the code out with Hugs. (Hugs didn't like the code generated with GHC extensions, -gac.) > main = do > Exception.try (print test1 >> fail "Test failed.") :: IO (Either ErrorCall ()) > Exception.try (print test2 >> fail "Test failed.") :: IO (Either ErrorCall ()) > Exception.try (print test3 >> fail "Test failed.") :: IO (Either ErrorCall ()) > } happy-2.1.7/tests/precedence002.y0000644000000000000000000000132107346545000014703 0ustar0000000000000000-- This module demonstrates a bug in the original 1.11 release of Happy. 
{ module Main where import System.IO import Control.Exception as Exception } %name parse %tokentype { Tok } %token '+' { Plus } '/' { Divide } int { Num $$ } %left '+' %left '*' %nonassoc '/' %% E : E '+' E { Plus' $1 $3 } | E '/' E { Divide' $1 $3 } | int { Num' $1 } { happyError :: [Tok] -> a happyError s = error (concatMap show s) data Tok = Plus | Divide | Num Int deriving Show data Syn = Plus' Syn Syn | Divide' Syn Syn | Num' Int deriving Show -- due to a bug in conflict resolution, this caused a parse error: tokens1 = [Num 6, Divide, Num 7, Plus, Num 8] main = print (parse tokens1) } happy-2.1.7/tests/rank2.y0000644000000000000000000000076007346545000013407 0ustar0000000000000000{ {-# LANGUAGE RankNTypes #-} {-# LANGUAGE GADTs #-} module Main where import System.IO import Data.Char } %name calc %tokentype { Token } %token tok { Token } %monad { IO } { (>>=) } { return } %% ib :: { (Int, Double, Bool) } : f n { ($1 $2, $1 $2, $1 True) } f :: { forall a. a -> a } : { id } n :: { forall a. 
Num a => a } : { 5 } { main = calc [] >>= print data Token = Token lexer :: String -> [Token] lexer _ = [] happyError tokens = ioError (userError "parse error") } happy-2.1.7/tests/shift01.y0000644000000000000000000000072707346545000013653 0ustar0000000000000000-- Testing the %shift directive { module Main where import System.IO import Data.Char } %expect 0 -- We must resolve the conflicts with %shift %name group_a %tokentype { Token } %token 'A' { A } %% exp : exp 'A' %shift { $1 ++ ",A" } | exp 'A' 'A' { $1 ++ ",2A" } | { "S" } { main = if group_a [A, A, A] == "S,2A,A" then return () else error "bad parse" data Token = A happyError _ = error "parse error" } happy-2.1.7/tests/test_rules.y0000644000000000000000000000300207346545000014553 0ustar0000000000000000{ import Control.Monad(when) import System.Exit } %monad { Maybe } { (>>=) } { return } %tokentype { Char } %token 'a' { 'a' } 'b' { 'b' } %name test1 test1 %name test2 test2 %% test1 : sepBy('a','b') { $1 } test2 : endBy('a','b') { $1 } many_rev1(p) : p { [$1] } | many_rev1(p) p { $2 : $1 } many1(p) : many_rev1(p) { reverse $1 } many(p) : many1(p) { $1 } | { [] } optional(p) : p { Just $1 } | { Nothing } sepR(p,q) : p q { $2 } sepL(p,q) : p q { $1 } sepBy1(p,q) : p many(sepR(q,p)) { $1 : $2 } sepBy(p,q) : sepBy1(p,q) { $1 } | { [] } endBy(p,q) : many (sepL(p,q)) { $1 } endBy1(p,q) : many1 (sepL(p,q)) { $1 } { happyError _ = Nothing tests = [ test1 "" == Just "" , test1 "a" == Just "a" , test1 "ab" == Nothing , test1 "aba" == Just "aa" , test1 "abab" == Nothing , test2 "" == Just "" , test2 "a" == Nothing , test2 "ab" == Just "a" , test2 "aba" == Nothing , test2 "abab" == Just "aa" ] main = do let failed = filter (not . snd) (zip [0..] tests) when (not (null failed)) $ do putStrLn ("Failed tests: " ++ show (map fst failed)) exitFailure putStrLn "Tests passed." 
} happy-2.1.7/tests/typeclass_monad001.y0000644000000000000000000000414407346545000016000 0ustar0000000000000000-- Testing %monad without %lexer, using the IO monad. { module Main where import System.IO import Data.Char } %name calc %tokentype { Token } %token num { TokenNum $$ } '+' { TokenPlus } '-' { TokenMinus } '*' { TokenTimes } '/' { TokenDiv } '^' { TokenExp } '\n' { TokenEOL } '(' { TokenOB } ')' { TokenCB } %left '-' '+' %left '*' %nonassoc '/' %left NEG -- negation--unary minus %right '^' -- exponentiation %monad { (MonadIO m) } { m } { (>>=) } { return } %% input : {- empty string -} { () } | input line { $1 } line : '\n' { () } | exp '\n' {% hPutStr stdout (show $1) } exp : num { $1 } | exp '+' exp { $1 + $3 } | exp '-' exp { $1 - $3 } | exp '*' exp { $1 * $3 } | exp '/' exp { $1 / $3 } | '-' exp %prec NEG { -$2 } -- | exp '^' exp { $1 ^ $3 } | '(' exp ')' { $2 } { main = do calc (lexer "1 + 2 * 3 / 4\n") {- -- check that non-associative operators can't be used together r <- try (calc (lexer "1 / 2 / 3")) case r of Left e -> return () Right _ -> ioError (userError "fail!") -} data Token = TokenExp | TokenEOL | TokenNum Double | TokenPlus | TokenMinus | TokenTimes | TokenDiv | TokenOB | TokenCB -- and a simple lexer that returns this datastructure. lexer :: String -> [Token] lexer [] = [] lexer ('\n':cs) = TokenEOL : lexer cs lexer (c:cs) | isSpace c = lexer cs | isDigit c = lexNum (c:cs) lexer ('+':cs) = TokenPlus : lexer cs lexer ('-':cs) = TokenMinus : lexer cs lexer ('*':cs) = TokenTimes : lexer cs lexer ('/':cs) = TokenDiv : lexer cs lexer ('^':cs) = TokenExp : lexer cs lexer ('(':cs) = TokenOB : lexer cs lexer (')':cs) = TokenCB : lexer cs lexNum cs = TokenNum (read num) : lexer rest where (num,rest) = span isNum cs isNum c = isDigit c || c == '.' 
happyError tokens = liftIO (ioError (userError "parse error")) -- vendored in parts of mtl class Monad m => MonadIO m where liftIO :: IO a -> m a instance MonadIO IO where liftIO = id } happy-2.1.7/tests/typeclass_monad002.ly0000644000000000000000000001172207346545000016155 0ustar0000000000000000----------------------------------------------------------------------------- Test for monadic Happy Parsers, Simon Marlow 1996. > { > {-# OPTIONS_GHC -fglasgow-exts #-} > -- -fglasgow-exts required because P is a type synonym, and Happy uses it > -- unsaturated. > import Data.Char > } > %name calc > %tokentype { Token } > %monad { (Monad m) } { P m } { thenP } { returnP } > %lexer { lexer } { TokenEOF } > %token > let { TokenLet } > in { TokenIn } > int { TokenInt $$ } > var { TokenVar $$ } > '=' { TokenEq } > '+' { TokenPlus } > '-' { TokenMinus } > '*' { TokenTimes } > '/' { TokenDiv } > '(' { TokenOB } > ')' { TokenCB } > %% > Exp :: {Exp} > : let var '=' Exp in Exp {% \s l -> return (ParseOk (Let l $2 $4 $6)) } > | Exp1 { Exp1 $1 } > > Exp1 :: {Exp1} > : Exp1 '+' Term { Plus $1 $3 } > | Exp1 '-' Term { Minus $1 $3 } > | Term { Term $1 } > > Term :: {Term} > : Term '*' Factor { Times $1 $3 } > | Term '/' Factor { Div $1 $3 } > | Factor { Factor $1 } > > Factor :: {Factor} > : int { Int $1 } > | var { Var $1 } > | '(' Exp ')' { Brack $2 } > { ----------------------------------------------------------------------------- The monad serves three purposes: * it passes the input string around * it passes the current line number around * it deals with success/failure. 
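Before the typeclass variant defines its monad, it is worth seeing the difference from the plain version in miniature: here the result is wrapped in an arbitrary base monad `m`, so effects from `m` (such as `IO`) can interleave with parsing. This is a minimal standalone sketch of that shape, not code from the test suite:

```haskell
-- Transformer-style variant: the ParseResult is returned inside an
-- arbitrary base monad m, unlike the pure "String -> Int -> ParseResult a".
data ParseResult a = ParseOk a | ParseFail String deriving Show

type P m a = String -> Int -> m (ParseResult a)

thenP :: Monad m => P m a -> (a -> P m b) -> P m b
m `thenP` k = \s l -> do
  res <- m s l                       -- run the first action in the base monad
  case res of
    ParseFail e -> return (ParseFail e)
    ParseOk a   -> k a s l           -- thread the same input and line number

returnP :: Monad m => a -> P m a
returnP a = \_ _ -> return (ParseOk a)

main :: IO ()
main = do
  res <- (returnP (1 :: Int) `thenP` \x -> returnP (x * 2)) "" 0
  case res of
    ParseOk n   -> print n           -- prints 2
    ParseFail e -> putStrLn e
```

Instantiating `m` to `IO` (as `main` does) is what lets the grammar file below write `%monad { (Monad m) } { P m } { thenP } { returnP }` and stay polymorphic in the base monad.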
> data ParseResult a > = ParseOk a > | ParseFail String > type P m a = String -> Int -> m (ParseResult a) > thenP :: Monad m => P m a -> (a -> P m b) -> P m b > m `thenP` k = \s l -> > do > res <- m s l > case res of > ParseFail s -> return (ParseFail s) > ParseOk a -> k a s l > returnP :: Monad m => a -> P m a > returnP a = \s l -> return (ParseOk a) ----------------------------------------------------------------------------- Now we declare the datastructure that we are parsing. > data Exp = Let Int String Exp Exp | Exp1 Exp1 > data Exp1 = Plus Exp1 Term | Minus Exp1 Term | Term Term > data Term = Times Term Factor | Div Term Factor | Factor Factor > data Factor = Int Int | Var String | Brack Exp The datastructure for the tokens... > data Token > = TokenLet > | TokenIn > | TokenInt Int > | TokenVar String > | TokenEq > | TokenPlus > | TokenMinus > | TokenTimes > | TokenDiv > | TokenOB > | TokenCB > | TokenEOF .. and a simple lexer that returns this datastructure. > lexer :: Monad m => (Token -> P m a) -> P m a > lexer cont s = case s of > [] -> cont TokenEOF [] > ('\n':cs) -> \line -> lexer cont cs (line+1) > (c:cs) > | isSpace c -> lexer cont cs > | isAlpha c -> lexVar (c:cs) > | isDigit c -> lexNum (c:cs) > ('=':cs) -> cont TokenEq cs > ('+':cs) -> cont TokenPlus cs > ('-':cs) -> cont TokenMinus cs > ('*':cs) -> cont TokenTimes cs > ('/':cs) -> cont TokenDiv cs > ('(':cs) -> cont TokenOB cs > (')':cs) -> cont TokenCB cs > where > lexNum cs = cont (TokenInt (read num)) rest > where (num,rest) = span isDigit cs > lexVar cs = > case span isAlpha cs of > ("let",rest) -> cont TokenLet rest > ("in",rest) -> cont TokenIn rest > (var,rest) -> cont (TokenVar var) rest > runCalc :: Monad m => String -> m Exp > runCalc s = > do > res <- calc s 1 > case res of > ParseOk e -> return e > ParseFail s -> error s ----------------------------------------------------------------------------- The following functions should be defined for all parsers. 
This is the overall type of the parser. > type Parse m = P m Exp > calc :: Monad m => Parse m The next function is called when a parse error is detected. It has the same type as the top-level parse function. > happyError :: P m a > happyError = \s i -> error ( > "Parse error in line " ++ show (i::Int) ++ "\n") ----------------------------------------------------------------------------- Here we test our parser. > main = > do > res <- runCalc "1 + 2 + 3" > case res of > (Exp1 (Plus (Plus (Term (Factor (Int 1))) > (Factor (Int 2))) (Factor (Int 3)))) -> > do > res <- runCalc "1 * 2 + 3" > case res of > (Exp1 (Plus (Term (Times (Factor (Int 1)) (Int 2))) > (Factor (Int 3)))) -> > do > res <- runCalc "1 + 2 * 3" > case res of > (Exp1 (Plus (Term (Factor (Int 1))) > (Times (Factor (Int 2)) (Int 3)))) -> > do > res <- runCalc "let x = 2 in x * (x - 2)" > case res of > (Let 1 "x" (Exp1 (Term (Factor (Int 2)))) > (Exp1 (Term (Times (Factor (Var "x")) > (Brack (Exp1 (Minus (Term (Factor (Var "x"))) > (Factor (Int 2))))))))) -> > print "Test works\n" > _ -> quit > _ -> quit > _ -> quit > _ -> quit > quit = print "Test failed\n" > } happy-2.1.7/tests/typeclass_monad_lexer.y0000644000000000000000000001000307346545000016745 0ustar0000000000000000{ {-# LANGUAGE FunctionalDependencies #-} {-# LANGUAGE FlexibleInstances #-} -- For ancient GHC 7.0.4 {-# LANGUAGE MultiParamTypeClasses #-} {-# LANGUAGE UndecidableInstances #-} import Control.Monad (liftM, ap) import Control.Applicative as A } %name parse exp %tokentype { Token } %error { parseError } %monad { (MonadIO m) } { Parser m } %lexer { lexer } { EOF } %token ID { Id _ } NUM { Num _ } PLUS { Plus } MINUS { Minus } TIMES { Times } LPAREN { LParen } RPAREN { RParen } %% exp :: { AST } : exp PLUS prod { Sum $1 $3 } | prod { $1 } prod :: { AST } : prod TIMES neg { Prod $1 $3 } | neg { $1 } neg :: { AST } : MINUS neg { Neg $2 } | atom { $1 } atom :: { AST } : ID { let Id str = $1 in Var str } | NUM { let Num n = $1 in Lit n } | 
LPAREN exp RPAREN { $2 } { data Token = Plus | Minus | Times | LParen | RParen | Id String | Num Int | EOF deriving (Eq, Ord, Show) data AST = Sum AST AST | Prod AST AST | Neg AST | Var String | Lit Int deriving (Eq, Ord) type Parser m = ExceptT () (Lexer m) type Lexer m = StateT [Token] m parseError :: MonadIO m => Token -> Parser m a parseError tok = do liftIO (putStrLn ("Parse error at " ++ show tok)) throwError () lexer :: MonadIO m => (Token -> Parser m a) -> Parser m a lexer cont = do toks <- get case toks of [] -> cont EOF first : rest -> do put rest cont first parse :: (MonadIO m) => Parser m AST parser :: (MonadIO m) => [Token] -> m (Maybe AST) parser input = let run :: (MonadIO m) => Lexer m (Maybe AST) run = do res <- runExceptT parse case res of Left () -> return Nothing Right ast -> return (Just ast) in do (out, _) <- runStateT run input return out main :: IO () main = let input = [Id "x", Plus, Minus, Num 1, Times, LParen, Num 2, Plus, Id "y", RParen] expected = Sum (Var "x") (Prod (Neg (Lit 1)) (Sum (Lit 2) (Var "y"))) in do res <- parser input case res of Nothing -> print "Test failed\n" Just actual | expected == actual -> print "Test works\n" | otherwise -> print "Test failed\n" -- vendored in parts of mtl class Monad m => MonadIO m where liftIO :: IO a -> m a instance MonadIO IO where liftIO = id class Monad m => MonadState s m | m -> s where put :: s -> m () get :: m s newtype StateT s m a = StateT { runStateT :: s -> m (a, s) } instance Monad m => Functor (StateT s m) where fmap = liftM instance Monad m => A.Applicative (StateT s m) where pure = return (<*>) = ap instance Monad m => Monad (StateT s m) where return x = StateT $ \s -> return (x, s) m >>= k = StateT $ \s0 -> do (x, s1) <- runStateT m s0 runStateT (k x) s1 instance Monad m => MonadState s (StateT s m) where put s = StateT $ \_ -> return ((), s) get = StateT $ \s -> return (s, s) instance MonadIO m => MonadIO (StateT e m) where liftIO m = StateT $ \s -> liftM (\x -> (x, s)) (liftIO 
m) class Monad m => MonadError e m | m -> e where throwError :: e -> m a newtype ExceptT e m a = ExceptT { runExceptT :: m (Either e a) } instance Monad m => Functor (ExceptT e m) where fmap = liftM instance Monad m => A.Applicative (ExceptT e m) where pure = return (<*>) = ap instance Monad m => Monad (ExceptT e m) where return = ExceptT . return . Right m >>= k = ExceptT $ do x <- runExceptT m case x of Left e -> return (Left e) Right y -> runExceptT (k y) instance MonadState s m => MonadState s (ExceptT e m) where put s = ExceptT (liftM Right (put s)) get = ExceptT (liftM Right get) instance MonadIO m => MonadIO (ExceptT e m) where liftIO = ExceptT . liftM Right . liftIO instance Monad m => MonadError e (ExceptT e m) where throwError = ExceptT . return . Left }
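The `%lexer { lexer } { EOF }` declarations used throughout these tests rely on Happy's continuation-passing lexer interface: the lexer never returns a token directly, it hands the next token to a continuation. Stripped of the monad-transformer machinery above, the interface can be sketched over a plain token-list state (names such as `TokTok` and `twoTokens` are illustrative, not from the test files):

```haskell
-- Minimal sketch of the threaded-lexer protocol: the lexer consumes one
-- token from the state and passes it to the parser's continuation.
data Token = TokTok Int | EOF deriving (Eq, Show)

type Lexer a = [Token] -> (a, [Token])

lexer :: (Token -> Lexer a) -> Lexer a
lexer cont toks = case toks of
  []         -> cont EOF []        -- end of input becomes an explicit EOF token
  (t : rest) -> cont t rest        -- hand the token over, shrink the state

-- A toy "parser" that pulls two tokens via the lexer and pairs them.
twoTokens :: Lexer (Token, Token)
twoTokens = lexer (\t1 -> lexer (\t2 rest -> ((t1, t2), rest)))

main :: IO ()
main = print (fst (twoTokens [TokTok 1, TokTok 2, TokTok 3]))
```

The `StateT`/`ExceptT` stack in `typeclass_monad_lexer.y` plays the same role as the `[Token]` argument here, with the added ability to signal parse errors and run `IO`.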