happy-lib-2.1.7/ChangeLog.md

# Revision history for Happy

## 2.1.7

* Add support for `{-# OPTIONS_HAPPY ... #-}` pragmas ([#342](https://github.com/haskell/happy/issues/342)).
* Tested with GHC 8.0 - 9.12.2. The Haskell code generated by Happy is for GHC 8.0 and up.

## 2.1.6

* No longer emit `{-# LANGUAGE PartialTypeSignatures #-}` twice ([#339](https://github.com/haskell/happy/issues/339)).
* No longer emit `import Data.Function` twice ([#341](https://github.com/haskell/happy/issues/341)).
* Tested with GHC 8.0 - 9.12.2. The Haskell code generated by Happy is for GHC 8.0 and up.

## 2.1.5

Add `Data.Tuple` to `Happy_Prelude` in order to fix the `--debug` build (#330).

## 2.1.4

Move `Paths_happy_lib` into `tabular` to prevent a Cabal bug concerning `--libsubdir` (#328). It is likely that this release fixes `cabal v1-install happy` as well, which had been broken since happy-2.0 (#315).

## 2.1.3

Auto-resolve shift/reduce conflicts involving the catch token. This was done to support ongoing work in GHC to utilise the catch token.

## 2.1.2

Fix a breaking change (#325) introduced by the previous fix for #131. Prelude is no longer used by Happy.

## 2.1.1

This release fixes two breaking changes:

* Properly qualify all uses of Prelude functions, fixing #131.
* Bring back the old `%errorhandlertype` directive, the use of which is discouraged in favour of the "Reporting expected tokens" mechanism in Happy 2.1, accessible via `%error.expected`.

## 2.1

* Added `--numeric-version` CLI flag.
* Documented and implemented the new feature "Resumptive parsing with `catch`".
* Documented (and reimplemented) the "Reporting expected tokens" feature (which turned out to cause a breaking change in this release: #320).

## 2.0.2

The 2.0.1 release in turn exposed two more regressions:

* Generated code uses PatternGuards without declaring it (#309).
* Use of `happy-lib:*` syntax to depend on private library components triggered a bug in Cabal versions 3.0 and 3.2 (#311).

This release fixes both.

## 2.0.1

The 2.0 release changed the indentation character from tabs to two spaces, triggering an unforced breaking change in GHC (#303). This release provides the fix by using eight spaces for indentation.

## 2.0

There are two main breaking changes in this release:

1. Removed non-array, non-GHC modes, so the flags `-ag` are the default now and become no-ops.
2. Generated parsers now activate the language extension `-XNoStrictData`, without which every use of a happy parser would lead to an immediate crash (#273). This causes us to drop support for GHC < 8.0.

Furthermore, the project structure was modularized and a library `happy-lib` containing the implementation of the `happy` executable was extracted. Quite similar to the situation with GHC vs. the GHC API, we expect that `happy` will continue to be a stable CLI tool with solid (if occasionally out of date) documentation, while the design, documentation and implementation of `happy-lib` are still in flux and their use is only recommended to expert users.

Other, more minor changes:

* Revert the new bootstrapping system of 1.21.0 to mitigate build issues (#255, #274).
* Encode action table offsets in 32 bits instead of 16 bits (#93, #199, #266). This increases the size of generated parsers a bit (about 250KB for GHC's parser), but also manages to generate parsers for grammars that were previously running into the size limit (#199).
* The documentation has been converted to reStructuredText, hosted at https://haskell-happy.readthedocs.io/en/latest/ (#226).
* A few internal refactorings to the structure of generated code.

## 1.21.0

The main focus of this release was revamping the build system and bootstrapping.

* The release no longer contains generated source code. Instead of simply requiring a pre-built bootstrap version of Happy in that event, we have a parser-combinator-based implementation of enough of Happy to bootstrap the rest. (Currently, the bootstrap version is everything but attribute grammars, and thus sufficient for e.g. GHC, but this is subject to change.) The bootstrap version of Happy is then sufficient to build Happy once again with all features enabled. Note that this means users of attribute grammars will have to modify the way they build Happy if they were previously building from Hackage relying on the pre-generated sources.
* Rather than creating many "templates" at build time, there is a single combined template. Different implementations are chosen using CPP, as was already done within the templates before.
* Some imports were tightened down, which may help building with newer versions of `base`.
## 1.20.1

* Fix for building with mtl-2.3.1 (GHC 9.6)

## 1.20.0

* Fix #121: the `-i` flag produces an `.info` file even if the `%expect` pragma is violated
* Fix #131: qualify uses of Prelude functions in generated code
* Fix #161: drop fewer parse items when generating an `.info` file
* Introduce the `%shift` directive to resolve shift/reduce conflicts explicitly, useful in conjunction with `%expect 0`
* Remove the deprecated build configuration flag `small_base`

## 1.19.12

* Fix for building with GHC 8.8.x
* Move custom Setup preprocessing steps into a separate executable, like Alex

## 1.19.11

* Fix for building with GHC 8.6.x

## 1.19.10

* Fix polymorphic (rank-n) non-terminals
* Fix for GHC 8.8.1

## 1.19.9

* Fix cabal warnings
* Bump upper bounds
* Fix build with GHC 8.4.1-alpha

## 1.19.8

* Fix issue #94 (some grammars don't compile due to new type signatures introduced to allow overloading to be used)

## 1.19.7

* Fix missing test suite files in the sdist

## 1.19.6

* Manually generate Parser.hs using Makefile before sdist, to fix bootstrapping problems with cabal sandboxes & new-build
* Documentation fixes
* Fixed GLR support
* New option `-p`/`--pretty` prints the grammar rules (only) to a file
* Added generation of additional type signatures to enable use of typeclasses in monadic parsers.
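The `%shift` directive from the 1.20.0 entry above is easiest to see in a grammar fragment. The following is a hypothetical dangling-else sketch (token, rule, and constructor names invented for illustration, not taken from this package): annotating the shorter production with `%shift` tells Happy to resolve the shift/reduce conflict in favour of shifting, so the grammar passes under `%expect 0`:

```
%expect 0

Stmt : 'if' Expr 'then' Stmt %shift        { If $2 $4 Nothing }
     | 'if' Expr 'then' Stmt 'else' Stmt   { If $2 $4 (Just $6) }
```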
## 1.19.5

* Fixes for GHC 7.10
* Code cleanups (thanks Index Int)

## 1.19.4

* Fix for GHC 7.10 (Applicative/Monad, #19, #21)

## 1.19.3

* Fix for GHC 7.2 (#16)

## 1.19.2

* Fixes for clang (XCode 5)

## 1.19.1

* Repackaged to build with GHC 7.7+

## 1.19

* Necessary changes to work with GHC 7.8

## 1.18.10

* Fix build with GHC 7.6

## 1.18.8

* Fix a packaging bug (cabal-install-0.10.2 didn't put the Happy-generated files in the sdist)

## 1.18.7

* Fix a bug in error handling when using `%monad` without `%lexer`

## 1.18.5 --- 17 Jun 2010

## 1.18.4 --- 23 April 2009

## 1.18.2 --- 5 November 2008

## 1.18.1 --- 14 October 2008

## 1.18 --- 13 October 2008

* New feature: EBNF-style parameterized macros, thanks to Iavor Diatchki
* Works with Cabal 1.2, 1.4 and 1.6
* A few minor bugfixes

## 1.17 --- 22 October 2007

* Cabal 1.2 is required
* Works with upcoming GHC 6.8.1
* Fix the `parE` bug (poor error message for errors in the grammar)
* Some performance improvements to Happy itself

## 1.16 --- 8 January 2007

* Switch to a Cabal build system: you need a recent version of Cabal (1.1.6 or later). If you have GHC 6.4.2, then you need to upgrade Cabal before building Happy. GHC 6.6 is fine.
* New `%error` directive
* New production forms: `{%% .. }` and `{%^ .. }`
* Added Attribute Grammar support, by Robert Dockins

## 1.15 --- 14 January 2005

* New `%expect` directive
* The list of tokens passed to happyError now includes the current token (not `%lexer`).
* Added support for ambiguous grammars via Generalized LR parsing
* Added `%partial` to indicate a parser that can return a result before EOF is reached.

## 1.14 --- 14 April 2004

* New meta-variable `$>` represents the rightmost token.
* Happy's OPTIONS pragma is merged with an existing one in the grammar file, if any.

## 1.13 --- 19 June 2002

* Support for newer versions of GHC (>= 5.04).
* Addition of an experimental flag: `--strict`.
## 1.11 --- 25 September 2001

* Tokens no longer have a default precedence --- if you want a token to have a precedence, you have to declare it.
* Bugfix to templates for GHC on 64-bit platforms.

## 1.10

* Bugfixes, and minor performance improvements.
* Most of the examples work again.

## 1.9

* A grammar may now contain several entry points, allowing several parsers to share parts of the grammar.
* Some bugfixes.

## 1.8

* Parser table compression, and more efficient table encoding when used with GHC. Large grammars can now be compiled in much less time/space than before using GHC.
* Yacc-style operator precedence, thanks to patches from Hermann Oliveira Rodrigues and Josef Svenningsson.
* A debug option which causes the generated parser to print tracing information at each step during parsing.

## 1.6

* Now written in, and generates, Haskell 98.
* Several bug fixes.
* A new option, `-c`, generates parsers that use GHC's `unsafeCoerce#` primitive to speed up parsing and cut down the binary size. The `-c` option can only be used with the `-g` (GHC extensions) option.
* Parsers generated with the `-g` option will compile to smaller binaries now --- some sources of parser bloat were identified and squished.
* Happy has a new Open Source license, based on the BSD license.
* A sample Haskell parser using Happy is included.

## 1.5

* Many bug fixes to the error recovery support, found by experimenting with the Haskell grammar and layout.
* Happy is about 5 times faster on large examples, due to some changes in the LALR(1) algorithms.

As of version 1.5, Happy is capable of parsing full Haskell. We have a Haskell parser that uses Happy, which will shortly be part of the library collection distributed with GHC.

## 1.2

* Supports Haskell 1.4
* Lots of bugs fixed
* Performance: the parser generator is at least 20% faster, and generated parsers should be faster due to the replacement of a data type with a newtype.
* Simple error recovery: designed to be enough to implement the Haskell layout rule.
* Revamped monad support: the monad can now be threaded through the lexer, enabling passing of state between the parser and the lexer (handy for the Haskell layout rule), and consistent error handling.
* The `%newline` feature is removed; the same effect can be achieved using the new monad support.

## 0.9

* Happy should be much faster than before.
* Generated parsers will be 5-10% smaller.
* Happy now compiles with ghc-0.26.
* Support for monadic parsers via `%monad` (see the documentation).
* New syntax: previously

  ```haskell
  f :: { }
  f : ...
    | ...
    etc.
  ```

  can now be written

  ```haskell
  f :: { }
    : ...
    | ...
    etc.
  ```

  (i.e. omit the extra `f`. It was always ignored anyway :-)

* Miscellaneous bug fixes.

happy-lib-2.1.7/README.md

# Happy

[![Build Status](https://github.com/haskell/happy/actions/workflows/haskell-ci.yml/badge.svg)](https://github.com/haskell/happy/actions)

Happy is a parser generator for Haskell 98 (and later).

Happy Parsing!
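To give a flavour of the input Happy consumes, here is a minimal, hypothetical grammar file (module, token, and parser names invented for illustration; the documentation linked below has complete, authoritative examples):

```
{
module Calc where
}

%name calc
%tokentype { Token }
%error { parseError }

%token
  int { TInt $$ }
  '+' { TPlus }

%%

Exp : Exp '+' int { $1 + $3 }
    | int         { $1 }

{
data Token = TInt Int | TPlus

parseError :: [Token] -> a
parseError _ = error "parse error"
}
```

Running `happy Calc.y` on such a file would generate a `Calc.hs` module exporting `calc :: [Token] -> Int`.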
## Documentation

Documentation is hosted on [Read the Docs](https://haskell-happy.readthedocs.io):

- [Online (HTML)](https://haskell-happy.readthedocs.io)
- [PDF](https://haskell-happy.readthedocs.io/_/downloads/en/latest/pdf/)
- [Downloadable HTML](https://haskell-happy.readthedocs.io/_/downloads/en/latest/htmlzip/)

For basic information of the sort typically found in a read-me, see the following sections of the docs:

- [Introduction](https://haskell-happy.readthedocs.io/en/latest/introduction.html)
- [Obtaining Happy](https://haskell-happy.readthedocs.io/en/latest/obtaining.html)
- [Contributing](https://haskell-happy.readthedocs.io/en/latest/contributing.html)

happy-lib-2.1.7/backend-glr/src/Happy/Backend/GLR.hs

module Happy.Backend.GLR where

import Happy.Paths

glrBackendDataDir :: IO String
glrBackendDataDir = getDataDir

happy-lib-2.1.7/backend-glr/src/Happy/Backend/GLR/ProduceCode.lhs

Module for producing GLR (Tomita) parsing code. This module is designed as an extension to the Haskell parser generator Happy.

(c) University of Durham, Ben Medlock 2001 -- initial code, for structure parsing
(c) University of Durham, Paul Callaghan 2004 -- extension to semantic rules, and various optimisations

%-----------------------------------------------------------------------------

> module Happy.Backend.GLR.ProduceCode
>   ( produceGLRParser
>   , baseTemplate
>   , libTemplate
>   , DecodeOption(..)
>   , FilterOption(..)
>   , GhcExts(..)
>   , Options
>   ) where

> import Happy.Paths ( version )
> import Happy.Grammar
> import Happy.Grammar.ExpressionWithHole ( substExpressionWithHole )
> import Happy.Tabular.LALR
> import Data.Array ( Array, (!), array, assocs )
> import Data.Char ( isSpace, isAlphaNum )
> import Data.List ( nub, (\\), sort, find, tails )
> import Data.Version ( showVersion )

%-----------------------------------------------------------------------------
File and Function Names

> baseTemplate, libTemplate :: String -> String
> baseTemplate td = td ++ "/GLR_Base.hs"   -- NB Happy uses / too
> libTemplate  td = td ++ "/GLR_Lib.hs"    -- Windows accepts this?

---
prefix for production names, to avoid name clashes

> prefix :: String
> prefix = "G_"

%-----------------------------------------------------------------------------
This type represents choice of decoding style for the result

> data DecodeOption
>   = TreeDecode
>   | LabelDecode

---
This type represents whether filtering done or not

> data FilterOption
>   = NoFiltering
>   | UseFiltering

---
This type represents whether GHC extensions are used or not - extra values are imports and ghc options reqd

> data GhcExts
>   = NoGhcExts
>   | UseGhcExts String   -- imports
>                [String] -- language extensions

---
this is where the exts matter

> show_st :: GhcExts -> {-State-}Int -> String
> show_st UseGhcExts{} = (++"#") . show
> show_st NoGhcExts    = show

---

> type DebugMode = Bool
> type Options = (DecodeOption, FilterOption, GhcExts)

%-----------------------------------------------------------------------------
"produceGLRParser" generates the files containing the Tomita parsing code. It produces two files - one for the data (small template), and one for the driver and data strs (large template).
> produceGLRParser > :: (String -- Base Template > ,String) -- Lib template > -> String -- Root of Output file name > -> (ActionTable > ,GotoTable) -- LR tables > -> String -- Start parse function name > -> Maybe String -- Module header > -> Maybe String -- User-defined stuff (token DT, lexer etc.) > -> (DebugMode,Options) -- selecting code-gen style > -> Grammar String -- Happy Grammar > -> Directives -- Directives in the .y-file > -> (String -- data > ,String) -- parser > > produceGLRParser (base, lib) basename tables start header trailer (debug,options) g pragmas > = ( content base $ "" > , lib_content lib > ) > where > (imps, lang_exts) = case ghcExts_opt of > UseGhcExts is os -> (is, os) > _ -> ("", []) > > defines = concat > [ [ "HAPPY_DEBUG" | debug ] > , [ "HAPPY_GHC" | UseGhcExts _ _ <- return ghcExts_opt ] > ] > (_,_,ghcExts_opt) = options Extract the module name from the given module declaration, if it exists. > m_mod_decl = find isModKW . zip [0..] . tails . (' ':) =<< header > isModKW (_, c0:'m':'o':'d':'u':'l':'e':c1:_) = not (validIDChar c0 || validIDChar c1) > isModKW _ = False > validIDChar c = isAlphaNum c || c `elem` "_'" > validModNameChar c = validIDChar c || c == '.' > data_mod = mod_name ++ "Data" > mod_name = case m_mod_decl of > Just (_, md) -> takeWhile validModNameChar (dropWhile (not . validModNameChar) (drop 8 md)) Or use a default based upon the filename (original behaviour). > Nothing -> reverse . takeWhile (`notElem` "\\/") $ reverse basename Remove the module declaration from the header so that the remainder of the header can be used in the generated code. > header_sans_mod = flip (maybe header) m_mod_decl $ \ (mi, _) -> do > hdr <- header Extract the string that comes before the module declaration... > let (before, mod_decl) = splitAt mi hdr > let isWhereKW (c0:'w':'h':'e':'r':'e':c1:_) = not (validIDChar c0 || validIDChar c1) > isWhereKW _ = False > let where_after = dropWhile (not . isWhereKW) . tails . 
(++ "\n") $ mod_decl > let after = drop 6 . concat . take 1 $ where_after ...and combine it with the string that comes after the 'where' keyword. > return $ before ++ "\n" ++ after > (sem_def, sem_info) = mkGSemType options g pragmas > table_text = mkTbls tables sem_info (ghcExts_opt) g > header_parts = fmap (span (\x -> take 3 (dropWhile isSpace x) == "{-#") > . lines) > header_sans_mod > -- Split off initial options, if they are present > -- Assume these options ONLY related to code which is in > -- parser tail or in sem. rules > content base_defs > = str (unlines > [ "{-# LANGUAGE " ++ l ++ " #-}\n" | l <- lang_exts ]) > . str (unlines $ maybe [] fst header_parts) .nl > . nl > . str (comment "data") .nl .nl > . str ("module " ++ data_mod ++ " where") .nl > . nl > . maybestr (fmap (unlines.snd) header_parts) .nl > . nl > . str base_defs .nl > . nl > . let count_nls = length . filter (=='\n') > pre_trailer = maybe 0 count_nls header_sans_mod -- check fmt below > + count_nls base_defs > + 10 -- for the other stuff > post_trailer = pre_trailer + maybe 0 count_nls trailer + 4 > in > str ("{-# LINE " ++ show pre_trailer ++ " " > ++ show (basename ++ "Data.hs") ++ "#-}") > -- This should show a location in basename.y -- but Happy > -- doesn't pass this info through. But we still avoid being > -- told a location in GLR_Base! > . nl > . nl > . maybestr trailer > .nl > .nl > . str ("{-# LINE " ++ show post_trailer ++ " " > ++ show (basename ++ "Data.hs") ++ "#-}") > . nl > . nl > . mkGSymbols g pragmas .nl > . nl > . sem_def .nl > . nl > . mkSemObjects options (monad_sub pragmas) sem_info .nl > . nl > . mkDecodeUtils options (monad_sub pragmas) sem_info .nl > . nl > . user_def_token_code (token_type pragmas) .nl > . nl > . 
table_text > lib_content lib_text > = let (pre,_drop_me : post) = break (== "fakeimport DATA") $ lines lib_text > in > unlines [ "{-# LANGUAGE CPP #-}" > , unlines > [ "#define " ++ d ++ " 1" | d <- defines ] > , unlines > [ "{-# LANGUAGE " ++ l ++ " #-}\n" | l <- lang_exts ] > , comment "driver" ++ "\n" > , "module " ++ mod_name ++ "(" > , case lexer pragmas of > Nothing -> "" > Just (lf,_) -> " " ++ lf ++ "," > , " " ++ start > , "" > , unlines pre > , imps > , "import " ++ data_mod > , start ++ " = glr_parse " > , "use_filtering = " ++ show use_filtering > , "top_symbol = " ++ prefix ++ start_prod > , unlines post > ] > start_prod = token_names g ! (let (_,_,i,_) = head $ starts g in i) > use_filtering = case options of (_, UseFiltering,_) -> True > _ -> False > comment :: String -> String > comment which > = "-- parser (" ++ which ++ ") produced by Happy (GLR) Version " ++ > showVersion version > user_def_token_code :: String -> String -> String > user_def_token_code tokenType > = str "type UserDefTok = " . str tokenType . nl > . str "instance TreeDecode " . brack tokenType . str " where" . nl > . str " decode_b f (Branch (SemTok t) []) = [happy_return t]" . nl > . str "instance LabelDecode " . brack tokenType . str " where" . nl > . str " unpack (SemTok t) = t" . nl %----------------------------------------------------------------------------- Formats the tables as code. > mkTbls :: (ActionTable -- Action table from Happy > ,GotoTable) -- Goto table from Happy > -> SemInfo -- info about production mapping > -> GhcExts -- Use unboxed values? > -> Grammar String -- Happy Grammar > -> ShowS > > mkTbls (action,goto) sem_info exts g > = let gsMap = mkGSymMap g > semfn_map = mk_semfn_map sem_info > in > writeActionTbl action gsMap (semfn_map !) exts g > . 
writeGotoTbl goto gsMap exts %----------------------------------------------------------------------------- Create a mapping of Happy grammar symbol integers to the data representation that will be used for them in the GLR parser. > mkGSymMap :: Grammar String -> [(Name,String)] > mkGSymMap g > = [ -- (errorTok, prefix ++ "Error") > ] > ++ [ (i, prefix ++ (token_names g) ! i) > | i <- user_non_terminals g ] -- Non-terminals > ++ [ (i, "HappyTok (" ++ mkMatch tok ++ ")") > | (i,tok) <- token_specs g ] -- Tokens (terminals) > ++ [(eof_term g,"HappyEOF")] -- EOF symbol (internal terminal) > where > mkMatch tok = case tok of > TokenFixed t -> t > TokenWithValue e -> substExpressionWithHole e "_" > toGSym :: [(Name, String)] -> Name -> String > toGSym gsMap i > = case lookup i gsMap of > Nothing -> error $ "No representation for symbol " ++ show i > Just g -> g %----------------------------------------------------------------------------- Take the ActionTable from Happy and turn it into a String representing a function that can be included as the action table in the GLR parser. 
It also shares identical reduction values as CAFs > writeActionTbl > :: ActionTable -> [(Name,String)] -> (Name->String) > -> GhcExts -> Grammar String -> ShowS > writeActionTbl acTbl gsMap semfn_map exts g > = interleave "\n" > $ map str > $ mkLines ++ [errorLine] ++ mkReductions > where > name = "action" > mkLines = concatMap (mkState) (assocs acTbl) > errorLine = name ++ " _ _ = Error" > mkState (i,arr) > = filter (/="") $ map (mkLine i) (assocs arr) > > mkLine state (symInt,action) > | symInt == errorTok -- skip error productions > = "" -- NB see ProduceCode's handling of these > | symInt == catchTok -- skip error productions > = "" -- NB see ProduceCode's handling of these > | otherwise > = case action of > LR'Fail -> "" > LR'MustFail -> "" > _ -> unwords [ startLine , mkAct action ] > where > startLine > = unwords [ name , show_st exts state, "(" , getTok , ") =" ] > getTok = toGSym gsMap symInt > mkAct act > = case act of > LR'Shift newSt _ -> "Shift " ++ show newSt ++ " []" > LR'Reduce r _ -> "Reduce " ++ "[" ++ mkRed r ++ "]" > LR'Accept -> "Accept" > LR'Multiple rs (LR'Shift st _) > -> "Shift " ++ show st ++ " " ++ mkReds rs > LR'Multiple rs r@(LR'Reduce{}) > -> "Reduce " ++ mkReds (r:rs) > _ -> error "writeActionTbl/mkAct: Unhandled case" > where > mkReds rs = "[" ++ tail (concat [ "," ++ mkRed r | LR'Reduce r _ <- rs ]) ++ "]" > mkRed r = "red_" ++ show r > mkReductions = [ mkRedDefn p > | p@(_, Production n _ _ _) <- zip [MkName 0 ..] $ productions g > , n `notElem` start_productions g ] > mkRedDefn (r, Production lhs_id rhs_ids (_code,_dollar_vars) _) > = mkRed r ++ " = ("++ lhs ++ "," ++ show arity ++ " :: Int," ++ sem ++")" > where > lhs = toGSym gsMap $ lhs_id > arity = length rhs_ids > sem = semfn_map r %----------------------------------------------------------------------------- Do the same with the Happy goto table. 
> writeGotoTbl :: GotoTable -> [(Name,String)] -> GhcExts -> ShowS > writeGotoTbl goTbl gsMap exts > = interleave "\n" (map str $ filter (not.null) mkLines) > . str errorLine . nl > where > name = "goto" > errorLine = "goto _ _ = " ++ show_st exts (negate 1) > mkLines = map mkState (assocs goTbl) > > mkState (i,arr) > = unlines $ filter (/="") $ map (mkLine i) (assocs arr) > > mkLine state (ntInt,goto) > = case goto of > NoGoto -> "" > Goto st -> unwords [ startLine , show_st exts st ] > where > startLine > = unwords [ name , show_st exts state, getGSym , "=" ] > getGSym = toGSym gsMap ntInt %----------------------------------------------------------------------------- Create the 'GSymbol' ADT for the symbols in the grammar > mkGSymbols :: Grammar String -> Directives -> ShowS > mkGSymbols g pragmas > = str dec > . str eof > . str tok > . interleave "\n" [ str " | " . str prefix . str sym . str " " > | sym <- syms ] > . str der > -- ++ eq_inst > -- ++ ord_inst > where > dec = "data GSymbol" > eof = " = HappyEOF" > tok = " | HappyTok {-!Int-} (" ++ token_type pragmas ++ ")" > der = " deriving (Show,Eq,Ord)" > syms = [ token_names g ! i | i <- user_non_terminals g ] NOTES: Was considering avoiding use of Eq/Ord over tokens, but this then means hand-coding the Eq/Ord classes since we're over-riding the usual order except in one case. maybe possible to form a union and do some juggling, but this isn't that easy, eg input type of "action". plus, issues about how token info gets into TreeDecode sem values - which might be tricky to arrange. <> eq_inst = "instance Eq GSymbol where" <> : " HappyTok i _ == HappyTok j _ = i == j" <> : [ " i == j = fromEnum i == fromEnum j" %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Semantic actions on rules. These are stored in a union type "GSem", and the semantic values are held on the branches created at the appropriate reduction. 
"GSem" type has one constructor per distinct type of semantic action and pattern of child usage. %----------------------------------------------------------------------------- Creating a type for storing semantic rules - also collects information on code structure and constructor names, for use in later stages. > type SemInfo > = [(String, String, [Int], [((Int, Int), ([(Int, TokenSpec)], String), [Int])])] > mkGSemType :: Options -> Grammar String -> Directives -> (ShowS, SemInfo) > mkGSemType (TreeDecode,_,_) g pragmas > = (def, map snd syms) > where > mtype s = case monad_sub pragmas of > Nothing -> s > Just (ty,_,_) -> ty ++ ' ' : brack s "" > def = str "data GSem" . nl > . str " = NoSem" . nl > . str (" | SemTok (" ++ token_type pragmas ++ ")") . nl > . interleave "\n" [ str " | " . str sym . str " " > | sym <- map fst syms ] > . str "instance Show GSem where" . nl > . interleave "\n" [ str " show " . str c . str "{} = " . str (show c) > | (_,c,_,_) <- map snd syms ] > syms = [ (c_name ++ " (" ++ ty ++ ")", (rty, c_name, mask, prod_info)) > | (i,this@(mask,args,rty)) <- zip [0..] (nub $ map fst info) > -- find unique types (plus mask) > , let c_name = "Sem_" ++ show i > , let mrty = mtype rty > , let ty = foldr (\l r -> l ++ " -> " ++ r) mrty args > , let code_info = [ j_code | (that, j_code) <- info, this == that ] > , let prod_info = [ ((i,k), code, js) > | (k,code) <- zip [0..] (nub $ map snd code_info) > , let js = [ j | (j,code2) <- code_info > , code == code2 ] > ] > -- collect specific info about productions with this type > ] > info = [ ((var_mask, args, i_ty), (j,(ts_pats,code))) > | i <- user_non_terminals g > , let i_ty = typeOf i > , j <- lookupProdsOfName g i -- all prod numbers > , let Production _ ts (raw_code,dollar_vars) _ = lookupProdNo g j > , let var_mask = map (\x -> x - 1) vars_used > where vars_used = sort $ nub dollar_vars > , let args = [ typeOf $ ts !! 
v | v <- var_mask ] > , let code | all isSpace raw_code = "()" > | otherwise = raw_code > , let ts_pats = [ (k+1,c) | k <- var_mask > , (t,c) <- token_specs g > , ts !! k == t ] > ] > typeOf n | n `elem` terminals g = token_type pragmas > | otherwise = case types g ! n of > Nothing -> "()" -- default > Just t -> t > -- NB expects that such labels are Showable > mkGSemType (LabelDecode,_,_) g pragmas > = (def, map snd syms) > where > def = str "data GSem" . nl > . str " = NoSem" . nl > . str (" | SemTok (" ++ token_type pragmas ++ ")") > . interleave "\n" [ str " | " . str sym . str " " > | sym <- map fst syms ] > . str " deriving (Show)" . nl > syms = [ (c_name ++ " (" ++ ty ++ ")", (ty, c_name, mask, prod_info)) > | (i,this@(mask,ty)) <- zip [0..] (nub $ map fst info) > -- find unique types > , let c_name = "Sem_" ++ show i > , let code_info = [ j_code | (that, j_code) <- info, this == that ] > , let prod_info = [ ((i,k), code, js) > | (k,code) <- zip [0..] (nub $ map snd code_info) > , let js = [ j | (j,code2) <- code_info > , code == code2 ] > ] > -- collect specific info about productions with this type > ] > info = [ ((var_mask,i_ty), (j,(ts_pats,code))) > | i <- user_non_terminals g > , let i_ty = typeOf i > , j <- lookupProdsOfName g i -- all prod numbers > , let Production _ ts (code,dollar_vars) _ = lookupProdNo g j > , let var_mask = map (\x -> x - 1) vars_used > where vars_used = sort $ nub dollar_vars > , let ts_pats = [ (k+1,c) | k <- var_mask > , (t,c) <- token_specs g > , ts !! k == t ] > ] > typeOf n = case types g ! n of > Nothing -> "()" -- default > Just t -> t %--------------------------------------- Creates the appropriate semantic values. 
- for label-decode, these are the code, but abstracted over the child indices - for tree-decode, these are the code abstracted over the children's values > mkSemObjects :: Options -> MonadInfo -> SemInfo -> ShowS > mkSemObjects (LabelDecode,filter_opt,_) _ sem_info > = interleave "\n" > $ [ str (mkSemFn_Name ij) > . str (" ns@(" ++ pat ++ "happy_rest) = ") > . str (" Branch (" ++ c_name ++ " (" ++ code ++ ")) ") > . str (nodes filter_opt) > | (_ty, c_name, mask, prod_info) <- sem_info > , (ij, (pats,code), _ps) <- prod_info > , let pat | null mask = "" > | otherwise = concatMap (\v -> mk_tok_binder pats (v+1) ++ ":") > [0..maximum mask] > , let nodes NoFiltering = "ns" > nodes UseFiltering = "(" ++ foldr (\l -> mkHappyVar (l+1) . showChar ':') "[])" mask > ] > where > mk_tok_binder pats v > = mk_binder (\s -> "(_,_,HappyTok (" ++ s ++ "))") pats v "" > mkSemObjects (TreeDecode,filter_opt,_) monad_info sem_info > = interleave "\n" > $ [ str (mkSemFn_Name ij) > . str (" ns@(" ++ pat ++ "happy_rest) = ") > . str (" Branch (" ++ c_name ++ " (" ++ sem ++ ")) ") > . str (nodes filter_opt) > | (_ty, c_name, mask, prod_info) <- sem_info > , (ij, (pats,code), _) <- prod_info > , let indent c = init $ unlines $ map (replicate 4 ' '++) $ lines c > , let mcode = case monad_info of > Nothing -> code > Just (_,_,rtn) -> case code of > '%':code' -> "\n" ++ indent code' > _ -> rtn ++ " (" ++ code ++ ")" > , let sem = foldr (\v t -> mk_lambda pats (v + 1) "" ++ t) mcode mask > , let pat | null mask = "" > | otherwise = concatMap (\v -> mkHappyVar (v+1) ":") > [0..maximum mask] > , let nodes NoFiltering = "ns" > nodes UseFiltering = "(" ++ foldr (\l -> mkHappyVar (l+1) . showChar ':') "[])" mask > ] > mk_lambda :: [(Int, TokenSpec)] -> Int -> String -> String > mk_lambda pats v > = (\s -> "\\" ++ s ++ " -> ") . 
mk_binder id pats v > mk_binder :: (String -> String) -> [(Int, TokenSpec)] -> Int -> String -> String > mk_binder wrap pats v > = case lookup v pats of > Nothing -> mkHappyVar v > Just p -> case p of > TokenFixed p' -> wrap . mkHappyVar v . showChar '@' . brack p' > TokenWithValue e -> wrap . brack' (substExpressionWithHole e . mkHappyVar v) --- standardise the naming scheme > mkSemFn_Name :: (Int, Int) -> String > mkSemFn_Name (i,j) = "semfn_" ++ show i ++ "_" ++ show j --- maps production name to the underlying (possibly shared) semantic function > mk_semfn_map :: SemInfo -> Array Name String > mk_semfn_map sem_info > = array (i_min, i_max) prod_map > where > i_min = MkName 0 > i_max = MkName $ maximum $ map (getName . fst) prod_map > prod_map = [ (MkName p, mkSemFn_Name ij) > | (_,_,_,pi') <- sem_info, (ij,_,ps) <- pi', p <- ps ] %----------------------------------------------------------------------------- Create default decoding functions Idea is that sem rules are stored as functions in the AbsSyn names, and only unpacked when needed. Using classes here to manage the unpacking. > mkDecodeUtils :: Options -> MonadInfo -> SemInfo -> ShowS > mkDecodeUtils (TreeDecode,filter_opt,_) monad_info seminfo > = interleave "\n" > $ map str (monad_defs monad_info) > ++ map mk_inst ty_cs > where > ty_cs = [ (ty, [ (c_name, mask) > | (ty2, c_name, mask, _j_vs) <- seminfo > , ty2 == ty > ]) > | ty <- nub [ ty | (ty,_,_,_) <- seminfo ] > ] -- group by same type > mk_inst (ty, cs_vs) > = str ("instance TreeDecode (" ++ ty ++ ") where ") . nl > . interleave "\n" > [ str " " > . str ("decode_b f (Branch (" ++ c_name ++ " s)") > . str (" (" ++ var_pat ++ ")) = ") > . cross_prod monad_info "s" (nodes filter_opt) > | (c_name, vs) <- cs_vs > , let vars = [ "b_" ++ show n | n <- var_range filter_opt vs ] > , let var_pat = foldr (\l r -> l ++ ":" ++ r) "_" vars > , let nodes NoFiltering = [ vars !! 
n | n <- vs ] > nodes UseFiltering = vars > ] > var_range _ [] = [] > var_range NoFiltering vs = [0 .. maximum vs ] > var_range UseFiltering vs = [0 .. length vs - 1] > cross_prod Nothing s_var nodes > = cross_prod_ (char '[' . str s_var . char ']') > (map str nodes) > cross_prod (Just (_,_,rtn)) s_var nodes > = str "map happy_join $ " > . cross_prod_ (char '[' . str rtn . char ' ' . str s_var . char ']') > (map str nodes) > cross_prod_ = foldl (\s a -> brack' > $ str "cross_fn" > . char ' ' . s > . str " $ decode f " > . a) > mkDecodeUtils (LabelDecode,_,_) monad_info seminfo > = interleave "\n" > $ map str > $ monad_defs monad_info ++ concatMap (mk_inst) ty_cs > where > ty_cs = [ (ty, [ (c_name, mask) > | (ty2, c_name, mask, _) <- seminfo > , ty2 == ty > ]) > | ty <- nub [ ty | (ty,_,_,_) <- seminfo ] > ] -- group by same type > mk_inst (ty, cns) > = ("instance LabelDecode (" ++ ty ++ ") where ") > : [ " unpack (" ++ c_name ++ " s) = s" > | (c_name, _mask) <- cns ] --- This selects the info used for monadic parser generation > type MonadInfo = Maybe (String,String,String) > monad_sub :: Directives -> MonadInfo > monad_sub pragmas > = case monad pragmas of > (True, _, ty,bd,ret) -> Just (ty,bd,ret) > _ -> Nothing > -- TMP: only use monad info if it was user-declared, and ignore ctxt > -- TMP: otherwise default to non-monadic code > -- TMP: (NB not sure of consequences of monads-everywhere yet) --- form the various monad-related defs. 
> monad_defs :: MonadInfo -> [String] > monad_defs Nothing > = [ "type Decode_Result a = a" > , "happy_ap = ($)" > , "happy_return = id"] > monad_defs (Just (ty,tn,rtn)) > = [ "happy_join x = (" ++ tn ++ ") x id" > , "happy_ap f a = (" ++ tn ++ ") f (\\f -> (" ++ tn ++ ") a (\\a -> " ++ rtn ++ "(f a)))" > , "type Decode_Result a = " ++ brack ty " a" > , "happy_return = " ++ rtn ++ " :: a -> Decode_Result a" > ] %----------------------------------------------------------------------------- Util Functions --- remove Happy-generated start symbols. > user_non_terminals :: Grammar String -> [Name] > user_non_terminals g > = non_terminals g \\ start_productions g > start_productions :: Grammar String -> [Name] > start_productions g = [ s | (_,s,_,_) <- starts g ] --- > mkHappyVar :: Int -> String -> String > mkHappyVar n = str "happy_var_" . shows n %------------------------------------------------------------------------------ Fast string-building functions > str :: String -> String -> String > str = showString > char :: Char -> String -> String > char c = (c :) > interleave :: String -> [String -> String] -> String -> String > interleave s = foldr (\a b -> a . str s . b) id > nl :: String -> String > nl = char '\n' > maybestr :: Maybe String -> String -> String > maybestr (Just s) = str s > maybestr _ = id > brack :: String -> String -> String > brack s = str ('(' : s) . char ')' > brack' :: (String -> String) -> String -> String > brack' s = char '(' . s . 
char ')' happy-lib-2.1.7/backend-lalr/src/Happy/Backend/0000755000000000000000000000000007346545000017315 5ustar0000000000000000happy-lib-2.1.7/backend-lalr/src/Happy/Backend/LALR.hs0000644000000000000000000000506307346545000020407 0ustar0000000000000000module Happy.Backend.LALR where import Happy.Paths import Data.Char lalrBackendDataDir :: IO String lalrBackendDataDir = getDataDir magicFilter :: Maybe String -> String -> String magicFilter magicName = case magicName of Nothing -> id Just name' -> let small_name = name' big_name = toUpper (head name') : tail name' filter_output ('h':'a':'p':'p':'y':rest) = small_name ++ filter_output rest filter_output ('H':'a':'p':'p':'y':rest) = big_name ++ filter_output rest filter_output (c:cs) = c : filter_output cs filter_output [] = [] in filter_output importsToInject :: Bool -> String importsToInject debug = concat ["\n", import_prelude, import_array, import_bits, import_glaexts, debug_imports, applicative_imports] where debug_imports | debug = import_debug | otherwise = "" applicative_imports = import_applicative import_glaexts = "import qualified GHC.Exts as Happy_GHC_Exts\n" _import_ghcstack = "import qualified GHC.Stack as Happy_GHC_Stack\n" import_array = "import qualified Data.Array as Happy_Data_Array\n" import_bits = "import qualified Data.Bits as Bits\n" import_debug = "import qualified System.IO as Happy_System_IO\n" ++ "import qualified System.IO.Unsafe as Happy_System_IO_Unsafe\n" ++ "import qualified Debug.Trace as Happy_Debug_Trace\n" import_applicative = "import Control.Applicative(Applicative(..))\n" ++ "import Control.Monad (ap)\n" import_prelude = unlines $ map (\ x -> unwords ["import qualified", x, "as Happy_Prelude"]) $ -- Keep this list alphabetically ordered! -- The style of list notation here has been chosen so that these lines can be sorted mechanically, -- e.g. in Emacs with M-x sort-lines. 
"Control.Monad" : "Data.Bool" : "Data.Function" : "Data.Int" : "Data.List" : "Data.Maybe" : "Data.String" : "Data.Tuple" : "GHC.Err" : "GHC.Num" : "Text.Show" : [] langExtsToInject :: [String] langExtsToInject = ["MagicHash", "BangPatterns", "TypeSynonymInstances", "FlexibleInstances", "PatternGuards", "NoStrictData", "UnboxedTuples", "PartialTypeSignatures"] defines :: Bool -> Bool -> String defines debug coerce = unlines [ "#define " ++ d ++ " 1" | d <- vars_to_define ] where vars_to_define = concat [ [ "HAPPY_DEBUG" | debug ] , [ "HAPPY_COERCE" | coerce ] ] happy-lib-2.1.7/backend-lalr/src/Happy/Backend/LALR/0000755000000000000000000000000007346545000020047 5ustar0000000000000000happy-lib-2.1.7/backend-lalr/src/Happy/Backend/LALR/ProduceCode.lhs0000644000000000000000000014747607346545000022776 0ustar0000000000000000----------------------------------------------------------------------------- The code generator. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Happy.Backend.LALR.ProduceCode (produceParser) where > import Happy.Paths ( version ) > import Data.Version ( showVersion ) > import Happy.Grammar > import Happy.Grammar.ExpressionWithHole ( substExpressionWithHole ) > import Happy.Tabular.LALR > import Data.Maybe ( isNothing, fromMaybe ) > import Data.Char ( ord, chr ) > import Data.List ( sortBy, nub ) > import Control.Monad.ST ( ST, runST ) > import Data.Word > import Data.Int > import Data.Bits > import Data.Array.ST ( STUArray ) > import Data.Array.Unboxed ( UArray ) > import Data.Array.MArray ( MArray(..), freeze, readArray, writeArray ) > import Data.Array.IArray ( Array, IArray(..), (!), array, assocs, elems ) %----------------------------------------------------------------------------- Produce the complete output file. 
> produceParser :: Grammar String -- grammar info > -> Maybe AttributeGrammarExtras > -> Directives -- directives supplied in the .y-file > -> ActionTable -- action table > -> GotoTable -- goto table > -> [String] -- language extensions > -> Maybe String -- module header > -> Maybe String -- module trailer > -> Bool -- use coercions > -> Bool -- strict parser > -> String > produceParser (Grammar > { productions = prods > , non_terminals = nonterms > , terminals = terms > , types = nt_types > , first_nonterm = first_nonterm' > , eof_term = eof > , first_term = fst_term > , token_names = token_names' > , token_specs = token_rep > , starts = starts' > }) > mAg > (Directives > { lexer = lexer' > , imported_identity = imported_identity' > , monad = (use_monad,monad_context,monad_tycon,monad_then,monad_return) > , token_type = token_type' > , error_handler = error_handler' > , error_expected = error_expected' > }) > action goto lang_exts module_header module_trailer > coerce strict > = ( top_opts > . maybestr module_header . nl > . str comment > -- comment goes *after* the module header, so that we > -- don't screw up any OPTIONS pragmas in the header. > . produceAbsSynDecl . nl > . produceTokToStringList > . produceActionTable > . produceReductions > . produceTokenConverter . nl > . produceIdentityStuff > . produceMonadStuff > . produceEntries > . produceStrict strict > . (case mAg of > Nothing -> id > Just ag -> produceAttributes ag) > . nl > . maybestr module_trailer . nl > ) "" > where > n_starts = length starts' > token = brack token_type' > > nowarn_opts = str "{-# OPTIONS_GHC -w #-}" . nl > -- XXX Happy-generated code is full of warnings. Some are easy to > -- fix, others not so easy, and others would require GHC version > -- #ifdefs. For now I'm just disabling all of them. 
We used to emit tabs for indentation, but since 2.0.0.1 we use 8 spaces for back-compat (#303): > indentStr = " " > indent = str indentStr > intMaybeHash = str "Happy_GHC_Exts.Int#" > -- Parsing monad and its constraints > pty = str monad_tycon -- str "P" > ptyAt a = brack' (pty . str " " . a) -- \(str "a") -> str "(P a)" > pcont = str monad_context -- str "Read a", some constraint for "P" to be a monad > n_missing_types = length (filter isNothing (elems nt_types)) > happyAbsSyn = str "(HappyAbsSyn " . str wild_tyvars . str ")" > where wild_tyvars = unwords (replicate n_missing_types "_") > > top_opts = > nowarn_opts > . (str $ unlines > [ unwords [ "{-# LANGUAGE", l, "#-}" ] | l <- lang_exts ]) %----------------------------------------------------------------------------- Make the abstract syntax type declaration, of the form: data HappyAbsSyn a t1 .. tn = HappyTerminal a | HappyAbsSyn1 t1 ... | HappyAbsSynn tn > produceAbsSynDecl If we're using coercions, we need to generate the injections etc. data HappyAbsSyn ti tj tk ... = HappyAbsSyn (where ti, tj, tk are type variables for the non-terminals which don't have type signatures). newtype HappyWrap = HappyWrap ti happyIn :: ti -> HappyAbsSyn ti tj tk ... happyIn x = unsafeCoerce# (HappyWrap x) {-# INLINE happyIn #-} happyOut :: HappyAbsSyn ti tj tk ... -> tn happyOut x = unsafeCoerce# x {-# INLINE happyOut #-} > | coerce > = let > happy_item = str "HappyAbsSyn " . str_tyvars > bhappy_item = brack' happy_item > > inject n ty > = (case ty of > Nothing -> id > Just tystr -> str "newtype " . mkHappyWrap n . str " = " . mkHappyWrap n . strspace . brack tystr . nl) > . mkHappyIn n . str " :: " . typeParam n ty > . str " -> " . bhappy_item . char '\n' > . mkHappyIn n . str " x = Happy_GHC_Exts.unsafeCoerce#" . strspace > . mkHappyWrapCon ty n (str "x") > . nl > . str "{-# INLINE " . mkHappyIn n . str " #-}" > > extract n ty > = mkHappyOut n . str " :: " . bhappy_item > . str " -> " . typeParamOut n ty . char '\n' > . 
mkHappyOut n . str " x = Happy_GHC_Exts.unsafeCoerce# x\n" > . str "{-# INLINE " . mkHappyOut n . str " #-}" > in > str "newtype " . happy_item . str " = HappyAbsSyn HappyAny\n" -- see NOTE below > . interleave "\n" (map str > [ "#if __GLASGOW_HASKELL__ >= 607", > "type HappyAny = Happy_GHC_Exts.Any", > "#else", > "type HappyAny = forall a . a", > "#endif" ]) > . interleave "\n" > [ inject n ty . nl . extract n ty | (n,ty) <- assocs nt_types ] > -- token injector > . str "happyInTok :: " . token . str " -> " . bhappy_item > . str "\nhappyInTok x = Happy_GHC_Exts.unsafeCoerce# x\n{-# INLINE happyInTok #-}\n" > -- token extractor > . str "happyOutTok :: " . bhappy_item . str " -> " . token > . str "\nhappyOutTok x = Happy_GHC_Exts.unsafeCoerce# x\n{-# INLINE happyOutTok #-}\n" > . str "\n" NOTE: in the coerce case we always coerce all the semantic values to HappyAbsSyn which is declared to be a synonym for Any. This is the type that GHC officially knows nothing about - it's the same type used to implement Dynamic. (in GHC 6.6 and older, Any didn't exist, so we use the closest approximation namely forall a . a). It's vital that GHC doesn't know anything about this type, because it will use any knowledge it has to optimise, and if the knowledge is false then the optimisation may also be false. Previously we used (() -> ()) as the type here, but this led to bogus optimisations (see GHC ticket #1616). Also, note that we must use a newtype instead of just a type synonym, because otherwise the type arguments to the HappyAbsSyn type constructor will lose information. See happy/tests/bug001 for an example where this matters. ... Otherwise, output the declaration in full... > | otherwise > = str "data HappyAbsSyn " . str_tyvars > . str "\n" . indent . str "= HappyTerminal " . token > . str "\n" . indent . str "| HappyErrorToken Happy_Prelude.Int\n" > . interleave "\n" > [ str "" . indent . str "| " . makeAbsSynCon n . strspace .
typeParam n ty > | (n, ty) <- assocs nt_types, > (nt_types_index ! n) == n] > where all_tyvars = [ 't' : show (getName n) | (n, Nothing) <- assocs nt_types ] > str_tyvars = str (unwords all_tyvars) %----------------------------------------------------------------------------- Next, the reduction functions. Each one has the following form: happyReduce_n_m = happyReduce n m reduction where { reduction ( (HappyAbsSynX | HappyTerminal) happy_var_1 : .. (HappyAbsSynX | HappyTerminal) happy_var_q : happyRest) = HappyAbsSynY ( <> ) : happyRest ; reduction _ _ = notHappyAtAll n m where n is the non-terminal number, and m is the rule number. NOTES on monad productions. These look like happyReduce_275 = happyMonadReduce 0# 119# happyReduction_275 happyReduction_275 (happyRest) = happyThen (code) (\r -> happyReturn (HappyAbsSyn r)) why can't we pass the HappyAbsSyn constructor to happyMonadReduce and save duplicating the happyThen/happyReturn in each monad production? Because this would require happyMonadReduce to be polymorphic in the result type of the monadic action, and since in array-based parsers the whole thing is one recursive group, we'd need a type signature on happyMonadReduce to get polymorphic recursion. Sigh. > produceReductions = > interleave "\n\n" > (zipWith produceReduction (drop n_starts prods) [ n_starts .. ]) > produceReduction (Production nt toks (code,vars_used) _) i > | is_monad_prod && (use_monad || imported_identity') > = mkReductionHdr (showInt lt) monad_reduce > . char '(' . interleave (" `HappyStk`\n" ++ indentStr) tokPatterns > . str "happyRest) tk\n" . indent . str " = happyThen (" > . str "(" > . tokLets (char '(' . str code' . char ')') > . str ")" > . (if monad_pass_token then str " tk" else id) > . str "\n" . indent . str ") (\\r -> happyReturn (" . this_absSynCon . str " r))" > | specReduceFun lt > = mkReductionHdr id ("happySpecReduce_" ++ show lt) > . interleave ("\n" ++ indentStr) tokPatterns > . str " = " > . 
tokLets ( > this_absSynCon . str "\n" . indent . indent . str " " > . char '(' . str code' . str "\n" . indent . str ")" > ) > . (if coerce || null toks || null vars_used then > id > else > nl . reductionFun . strspace > . interleave " " (replicate (length toks) (str "_")) > . str " = notHappyAtAll ") > | otherwise > = mkReductionHdr (showInt lt) "happyReduce" > . char '(' . interleave (" `HappyStk`\n" ++ indentStr) tokPatterns > . str "happyRest)\n" . indent . str " = " > . tokLets > ( this_absSynCon . str "\n" . indent . indent . str " " > . char '(' . str code'. str "\n" . indent . str ") `HappyStk` happyRest" > ) > where > (code', is_monad_prod, monad_pass_token, monad_reduce) > = case code of > '%':'%':code1 -> (code1, True, True, "happyMonad2Reduce") > '%':'^':code1 -> (code1, True, True, "happyMonadReduce") > '%':code1 -> (code1, True, False, "happyMonadReduce") > _ -> (code, False, False, "") > -- adjust the nonterminal number for the array-based parser > -- so that nonterminals start at zero. > adjusted_nt = getName nt - getName first_nonterm' > mkReductionHdr lt' s = > let tysig = case lexer' of > Nothing -> id > _ -> mkReduceFun i . str " :: " . pcont > . str " => " . intMaybeHash > . str " -> " . str token_type' > . str " -> " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " > . happyAbsSyn . str " -> " > . pty . str " " . happyAbsSyn . str "\n" > in tysig . mkReduceFun i . str " = " > . str s . strspace . lt' . strspace . showInt adjusted_nt > . strspace . reductionFun . nl > . reductionFun . strspace > > reductionFun = str "happyReduction_" . shows i > > tokPatterns > | coerce = reverse (map mkDummyVar [1 .. length toks]) > | otherwise = reverse (zipWith tokPattern [1..] toks) > > tokPattern n _ | n `notElem` vars_used = char '_' > tokPattern n t | t >= firstStartTok && t < fst_term > = if coerce > then mkHappyWrapCon (nt_types ! t) t (mkHappyVar n) > else brack' ( > makeAbsSynCon t . str " " . 
mkHappyVar n > ) > tokPattern n t > = if coerce > then mkHappyTerminalVar n t > else str "(HappyTerminal " > . mkHappyTerminalVar n t > . char ')' > > tokLets code'' > | coerce && not (null cases) > = interleave ("\n"++indentStr) cases > . code'' . str (replicate (length cases) '}') > | otherwise = code'' > > cases = [ str "case " . extract t . strspace . mkDummyVar n > . str " of { " . tokPattern n t . str " -> " > | (n,t) <- zip [1..] toks, > n `elem` vars_used ] > > extract t | t >= firstStartTok && t < fst_term = mkHappyOut t > | otherwise = str "happyOutTok" > > lt = length toks > this_absSynCon | coerce = mkHappyIn nt > | otherwise = makeAbsSynCon nt %----------------------------------------------------------------------------- The token conversion function. > produceTokenConverter > = str "happyTerminalToTok term = case term of {\n" . indent > . (case lexer' of Just (_, eof') -> str eof' . str " -> " . eofTok . str ";\n" . indent; _ -> id) > . interleave (";\n" ++ indentStr) (map doToken token_rep) > . str "_ -> -1#;\n" . indent . str "}\n" -- token number -1# (INVALID_TOK) signals an invalid token > . str "{-# NOINLINE happyTerminalToTok #-}\n" > . str "\n" . > (case lexer' of { > Nothing -> > str "happyLex kend _kmore [] = kend notHappyAtAll []\n" > . str "happyLex _kend kmore (tk:tks) = kmore (happyTerminalToTok tk) tk tks\n" > . str "{-# INLINE happyLex #-}\n" > . str "\n" > . str "happyNewToken action sts stk = happyLex (\\tk -> " . eofAction "notHappyAtAll" . str ") (" > . str "\\i tk -> " . doAction . str " sts stk)\n" > . str "\n" > . str "happyReport " . eofTok . str " tk explist resume tks = happyReport' tks explist resume\n" > . str "happyReport _ tk explist resume tks = happyReport' (tk:tks) explist (\\tks -> resume (Happy_Prelude.tail tks))\n" > -- when the token is EOF, tk == _|_ (notHappyAtAll) > -- so we must not pass it to happyReport' > . str "\n"; > Just (lexer'',eof') -> > str "happyLex kend kmore = " . str lexer'' . 
str " (\\tk -> case tk of {\n" . indent > . str eof' . str " -> kend tk;\n" . indent > . str "_ -> kmore (happyTerminalToTok tk) tk })\n" > . str "{-# INLINE happyLex #-}\n" > . str "\n" > . str "happyNewToken action sts stk = happyLex (\\tk -> " . eofAction "tk" . str ") (" > . str "\\i tk -> " . doAction . str " sts stk)\n" > . str "\n" > . str "happyReport " . eofTok . str " = happyReport'\n" > . str "happyReport _ = happyReport'\n" > -- superfluous pattern match needed to force happyReport to > -- have the correct type. > . str "\n"; > }) > where > eofAction tk = str "happyDoAction " > . eofTok . strspace . str tk > . str " action sts stk" > eofTok = showInt (tokIndex eof) > doAction = str "happyDoAction i tk action" > doToken (i,tok) = str (removeDollarDollar tok) . str " -> " . showInt (tokIndex i) Use a variable rather than '_' to replace '$$', so we can use it on the left hand side of '@'. > removeDollarDollar tok = case tok of > TokenFixed t -> t > TokenWithValue e -> substExpressionWithHole e "happy_dollar_dollar" > mkHappyTerminalVar :: Int -> Name -> String -> String > mkHappyTerminalVar i t = > case lookup t token_rep of > Nothing -> pat > Just (TokenFixed _) -> pat > Just (TokenWithValue e) -> brack $ substExpressionWithHole e $ pat [] > where > pat = mkHappyVar i > tokIndex i = getName i - getName fst_term + 2 -- +2: errorTok, catchTok %----------------------------------------------------------------------------- Action Tables. Here we do a bit of trickery and replace the normal default action (failure) for each state with at least one reduction action. For each such state, we pick one reduction action to be the default action. This should make the code smaller without affecting the speed. It changes the semantics for errors, however; errors could be detected in a different state now (but they'll still be detected at the same point in the token stream).
SG: For a data point, in issue93 the happyTable triples in size when we always pick failure as the default reduction. Presumably that is because there are quite a few reduction states, in which the only non-default transition is a reduction. Our scheme above ensures that these states don't occupy space in the main happyTable at all; they just get an entry in the happyDefActions. Further notes on default cases: Default reductions are important when error recovery is considered: we don't allow reductions whilst in error recovery, so we'd like the parser to automatically reduce down to a state where the error token can be shifted before entering error recovery. This is achieved by using default reductions wherever possible. One case to consider is: State 345 con -> conid . (rule 186) qconid -> conid . (rule 212) error reduce using rule 212 '{' reduce using rule 186 etc. we should make reduce_212 the default reduction here. So the rules become: * if there is a production error -> reduce_n then make reduce_n the default action. * if there is a non-reduce action for the error token, the default action for this state must be "fail". * otherwise pick the most popular reduction in this state for the default. * if there are no reduce actions in this state, then the default action remains 'enter error recovery'. This gives us an invariant: there won't ever be a production of the type 'error -> reduce_n' explicitly in the grammar, which means that whenever an unexpected token occurs, either the parser will reduce straight back to a state where the error token can be shifted, or if none exists, we'll get a parse error. In theory, we won't need the machinery to discard states in the parser... > produceActionTable > = produceActionArray > . produceReduceArray > . produceRuleArray > . produceCatchStates > . str "happy_n_terms = " . shows n_terminals . str " :: Happy_Prelude.Int\n" > . str "happy_n_nonterms = " . shows n_nonterminals . str " :: Happy_Prelude.Int\n\n" > . 
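As a standalone illustration of these selection rules, here is a small sketch over a toy action type. The `Action` and `pickDefault` names and the token encodings are invented for this example only; the real logic is `getDefault` further below.

```haskell
-- Toy model of the default-action rules listed above. Token 0 stands for
-- the error token and token 1 for the catch token (invented encodings).
data Action = Shift Int | Reduce Int | Fail
  deriving (Eq, Show)

pickDefault :: [(Int, Action)] -> Action
pickDefault actions
  -- an error -> reduce_n production makes reduce_n the default
  | (a : _) <- errorActs
  = case a of
      Reduce n -> Reduce n
      _        -> Fail   -- non-reduce action on the error token: must fail
  -- a state that can shift the catch token must not reduce by default
  | any isShift catchActs
  = Fail
  -- otherwise pick a reduction; none left means 'enter error recovery'
  | (a : _) <- reduces
  = a
  | otherwise
  = Fail
  where
    errorActs = [ a | (0, a) <- actions ]
    catchActs = [ a | (1, a) <- actions ]
    reduces   = [ a | (_, a@(Reduce _)) <- actions ]
    isShift (Shift _) = True
    isShift _         = False
```

For the State 345 example above, `pickDefault [(0, Reduce 212), (5, Reduce 186)]` yields `Reduce 212`, matching the first rule.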
str "happy_n_starts = " . shows n_starts . str " :: Happy_Prelude.Int\n\n" > > produceTokToStringList > = str "{-# NOINLINE happyTokenStrings #-}\n" > . str "happyTokenStrings = " . shows (drop (getName fst_term - 1) (elems token_names')) . str "\n" > -- fst_term - 1: fst_term includes eofToken, but that is last in the list. > . str "\n" action array indexed by (terminal * last_state) + state > produceActionArray > = str "happyActOffsets :: HappyAddr\n" > . str "happyActOffsets = HappyA# \"" --" > . hexChars act_offs > . str "\"#\n\n" --" > > . str "happyGotoOffsets :: HappyAddr\n" > . str "happyGotoOffsets = HappyA# \"" --" > . hexChars goto_offs > . str "\"#\n\n" --" > > . str "happyDefActions :: HappyAddr\n" > . str "happyDefActions = HappyA# \"" --" > . hexChars defaults > . str "\"#\n\n" --" > > . str "happyCheck :: HappyAddr\n" > . str "happyCheck = HappyA# \"" --" > . hexChars check > . str "\"#\n\n" --" > > . str "happyTable :: HappyAddr\n" > . str "happyTable = HappyA# \"" --" > . hexChars table > . str "\"#\n\n" --" > n_terminals = length terms > n_nonterminals = length nonterms - n_starts -- lose %starts > > (act_offs,goto_offs,table,defaults,check,catch_states) > = mkTables action goto first_nonterm' fst_term > n_terminals n_nonterminals n_starts > > produceReduceArray > = str "happyReduceArr = Happy_Data_Array.array (" > . shows (n_starts :: Int) -- omit the %start reductions > . str ", " > . shows n_rules > . str ") [\n" > . interleave' ",\n" (map reduceArrElem [n_starts..n_rules]) > . str "\n" . indent . str "]\n\n" > > produceRuleArray -- rule number to (non-terminal number, rule length) > = str "happyRuleArr :: HappyAddr\n" > . str "happyRuleArr = HappyA# \"" -- " > . hexChars (concatMap (\(nt,len) -> [nt,len]) ruleArrElems) > . 
str "\"#\n\n" --" > > ruleArrElems = map (\(Production nt toks _code _prio) -> (getName nt - getName first_nonterm',length toks)) (drop n_starts prods) > > n_rules = length prods - 1 :: Int > > produceCatchStates > = str "happyCatchStates :: [Happy_Prelude.Int]\n" > . str "happyCatchStates = " . shows catch_states . str "\n\n" > showInt i = shows i . showChar '#' This lets examples like: data HappyAbsSyn t1 = HappyTerminal ( HaskToken ) | HappyAbsSyn1 ( HaskExp ) | HappyAbsSyn2 ( HaskExp ) | HappyAbsSyn3 t1 *share* the definition for ( HaskExp ) data HappyAbsSyn t1 = HappyTerminal ( HaskToken ) | HappyAbsSyn1 ( HaskExp ) | HappyAbsSyn3 t1 ... cutting down on the work that the type checker has to do. Note, this *could* introduce lack of polymorphism, for types that have alphas in them. Maybe we should outlaw them inside { } > nt_types_index :: Array Name Name > nt_types_index = array (bounds nt_types) > [ (a, fn a b) | (a, b) <- assocs nt_types ] > where > fn n Nothing = n > fn _ (Just a) = fromMaybe (error "can't find an item in list") (lookup a assoc_list) > assoc_list = [ (b,a) | (a, Just b) <- assocs nt_types ] > makeAbsSynCon = mkAbsSynCon nt_types_index > produceIdentityStuff | use_monad = id > | imported_identity' = > str "type HappyIdentity = Identity\n" > . str "happyIdentity = Identity\n" > . str "happyRunIdentity = runIdentity\n\n" > | otherwise = > str "newtype HappyIdentity a = HappyIdentity a\n" > . str "happyIdentity = HappyIdentity\n" > . str "happyRunIdentity (HappyIdentity a) = a\n\n" > . str "instance Happy_Prelude.Functor HappyIdentity where\n" > . str " fmap f (HappyIdentity a) = HappyIdentity (f a)\n\n" > . str "instance Applicative HappyIdentity where\n" > . str " pure = HappyIdentity\n" > . str " (<*>) = ap\n" > . str "instance Happy_Prelude.Monad HappyIdentity where\n" > . str " return = pure\n" > .
str " (HappyIdentity p) >>= q = q p\n\n" MonadStuff: - with no %monad or %lexer: happyThen :: () => HappyIdentity a -> (a -> HappyIdentity b) -> HappyIdentity b happyReturn :: () => a -> HappyIdentity a happyThen1 m k tks = happyThen m (\a -> k a tks) happyFmap1 f m tks = happyThen (m tks) (\a -> happyReturn (f a)) happyReturn1 = \a tks -> happyReturn a - with %monad: happyThen :: CONTEXT => P a -> (a -> P b) -> P b happyReturn :: CONTEXT => a -> P a happyThen1 m k tks = happyThen m (\a -> k a tks) happyFmap1 f m tks = happyThen (m tks) (\a -> happyReturn (f a)) happyReturn1 = \a tks -> happyReturn a - with %monad & %lexer: happyThen :: CONTEXT => P a -> (a -> P b) -> P b happyReturn :: CONTEXT => a -> P a happyThen1 = happyThen happyReturn1 = happyReturn happyFmap1 f m = happyThen m (\a -> happyReturn (f a)) > produceMonadStuff = > str "happyThen :: " . pcont . str " => " . ptyAt (str "a") > . str " -> (a -> " . ptyAt (str "b") > . str ") -> " . ptyAt (str "b") . str "\n" > . str "happyThen = " . brack monad_then . nl > . str "happyReturn :: " . pcont . str " => a -> " . ptyAt (str "a") . str "\n" > . str "happyReturn = " . brack monad_return . nl > . case lexer' of > Nothing -> > str "happyThen1 m k tks = (" . str monad_then > . str ") m (\\a -> k a tks)\n" > . str "happyFmap1 f m tks = happyThen (m tks) (\\a -> happyReturn (f a))\n" > . str "happyReturn1 :: " . pcont . str " => a -> b -> " . ptyAt (str "a") . str "\n" > . str "happyReturn1 = \\a tks -> " . brack monad_return > . str " a\n" > . str "happyReport' :: " . pcont . str " => " > . str "[" . token . str "] -> " > . str "[Happy_Prelude.String] -> (" > . str "[" . token . str "] -> " > . ptyAt (str "a") . str ") -> " > . ptyAt (str "a") > . str "\n" > . str "happyReport' = " . callReportError . str "\n" > . str "\n" > . str "happyAbort :: " . pcont . str " => " > . str "[" . token . str "] -> " > . ptyAt (str "a") > . str "\n" > . str "happyAbort = " . str abort_handler . str "\n" > . 
str "\n" > _ -> > let > happyDoParseSig = > str "happyDoParse :: " . pcont . str " => " . intMaybeHash > . str " -> " . pty . str " " . happyAbsSyn . str "\n" > . str "\n" > newTokenSig = > str "happyNewToken :: " . pcont . str " => " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " . happyAbsSyn > . str " -> " . ptyAt happyAbsSyn . str"\n" > . str "\n" > doActionSig = > str "happyDoAction :: " . pcont . str " => " . intMaybeHash > . str " -> " . str token_type' . str " -> " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " . happyAbsSyn > . str " -> " . ptyAt happyAbsSyn . str "\n" > . str "\n" > reduceArrSig = > str "happyReduceArr :: " . pcont > . str " => Happy_Data_Array.Array Happy_Prelude.Int (" . intMaybeHash > . str " -> " . str token_type' . str " -> " . intMaybeHash > . str " -> Happy_IntList -> HappyStk " . happyAbsSyn > . str " -> " . ptyAt happyAbsSyn . str ")\n" > . str "\n" > in happyDoParseSig . newTokenSig . doActionSig . reduceArrSig > . str "happyThen1 :: " . pcont . str " => " . pty > . str " a -> (a -> " . pty > . str " b) -> " . pty . str " b\n" > . str "happyThen1 = happyThen\n" > . str "happyFmap1 f m = happyThen m (\\a -> happyReturn (f a))\n" > . str "happyReturn1 :: " . pcont . str " => a -> " . ptyAt (str "a") . str "\n" > . str "happyReturn1 = happyReturn\n" > . str "happyReport' :: " . pcont . str " => " > . token . str " -> " > . str "[Happy_Prelude.String] -> " > . ptyAt (str "a") . str " -> " > . ptyAt (str "a") > . str "\n" > . str "happyReport' = " . callReportError . str "\n" > . str "\n" > . str "happyAbort :: " . pcont . str " => " > . ptyAt (str "a") > . str "\n" > . str "happyAbort = " . str abort_handler . str "\n" > . str "\n" The error handler takes up to three arguments. An error handler specified with %error is passed the current token when used with %lexer as the first argument, but happyError (the old way but kept for compatibility) is not passed the current token. 
Furthermore, the second argument is the list of expected tokens in the presence of the %error.expected directive. The last argument is the "resumption", a continuation that tries to find an item on the stack taking a @catch@ terminal where parsing may resume, in the presence of the two-argument form of the %error directive. In order to support the legacy %errorhandlertype directive, we retain a special code path for `OldExpected`. > callReportError = -- this one wraps around report_error_handler to expose a unified interface > str "(\\tokens expected resume -> " . > (if use_monad then str "" > else str "HappyIdentity Happy_Prelude.$ ") . > report_error_handler . > (case error_expected' of > OldExpected -> str " (tokens, expected)" -- back-compat for %errorhandlertype > _ -> > (case (error_handler', lexer') of (DefaultErrorHandler, Just _) -> id > _ -> str " tokens") . > (case error_expected' of NewExpected -> str " expected" > NoExpected -> id)) . > (case error_handler' of ResumptiveErrorHandler{} -> str " resume" > _ -> id) . > str ")" > report_error_handler = case error_handler' of > DefaultErrorHandler -> str "happyError" > CustomErrorHandler h -> brack h > ResumptiveErrorHandler _abort report -> brack report > abort_handler = case error_handler' of > ResumptiveErrorHandler abort _report -> abort > _ -> "Happy_Prelude.error \"Called abort handler in non-resumptive parser\"" > reduceArrElem n > = str "" . indent . str "(" . shows n . str " , " > . str "happyReduce_" . shows n . char ')' ----------------------------------------------------------------------------- -- Produce the parser entry and exit points > produceEntries > = interleave "\n\n" (map produceEntry (zip starts' [0..])) > . case mAg of > Nothing -> id > Just ag -> produceAttrEntries ag starts' > produceEntry :: ((String, t0, Name, t1), Int) -> String -> String > produceEntry ((name, _start_nonterm, accept_nonterm, _partial), no) > = (if isNothing mAg then str name else str "do_" . str name) > .
maybe_tks > . str " = " > . str unmonad > . str "happySomeParser where\n" > . str " happySomeParser = happyThen (happyDoParse " > . shows no . str "#" > . maybe_tks > . str ") " > . brack' (if coerce > then str "\\x -> happyReturn (let {" . mkHappyWrapCon (nt_types ! accept_nonterm) accept_nonterm (str "x'") > . str " = " . mkHappyOut accept_nonterm . str " x} in x')" > else str "\\x -> case x of {HappyAbsSyn" > . showsName (nt_types_index ! accept_nonterm) > . str " z -> happyReturn z; _other -> notHappyAtAll }" > ) > where > maybe_tks | isNothing lexer' = str " tks" > | otherwise = id > unmonad | use_monad = "" > | otherwise = "happyRunIdentity " > produceAttrEntries ag starts'' > = interleave "\n\n" (map f starts'') > where > f = case (use_monad,lexer') of > (True,Just _) -> \(name,_,_,_) -> monadAndLexerAE name > (True,Nothing) -> \(name,_,_,_) -> monadAE name > (False,Just _) -> error "attribute grammars not supported for non-monadic parsers with %lexer" > (False,Nothing)-> \(name,_,_,_) -> regularAE name > > defaultAttr = fst (head $ attributes ag) > > monadAndLexerAE name > = str name . str " = " > . str "do { " > . str "f <- do_" . str name . str "; " > . str "let { (conds,attrs) = f happyEmptyAttrs } in do { " > . str "Happy_Prelude.sequence_ conds; " > . str "Happy_Prelude.return (". str defaultAttr . str " attrs) }}" > monadAE name > = str name . str " toks = " > . str "do { " > . str "f <- do_" . str name . str " toks; " > . str "let { (conds,attrs) = f happyEmptyAttrs } in do { " > . str "Happy_Prelude.sequence_ conds; " > . str "Happy_Prelude.return (". str defaultAttr . str " attrs) }}" > regularAE name > = str name . str " toks = " > . str "let { " > . str "f = do_" . str name . str " toks; " > . str "(conds,attrs) = f happyEmptyAttrs; " > . str "x = Happy_Prelude.foldr Happy_GHC_Exts.seq attrs conds; " > . str "} in (". str defaultAttr . 
str " x)" ---------------------------------------------------------------------------- -- Produce attributes declaration for attribute grammars > produceAttributes :: AttributeGrammarExtras -> String -> String > produceAttributes AttributeGrammarExtras { > attributes = attrs, > attributetype = attributeType > } > = str "data " . attrHeader . str " = HappyAttributes {" . attributes' . str "}" . nl > . str "happyEmptyAttrs = HappyAttributes {" . attrsErrors . str "}" . nl > where attributes' = foldl1 (\x y -> x . str ", " . y) $ map formatAttribute attrs > formatAttribute (ident,typ) = str ident . str " :: " . str typ > attrsErrors = foldl1 (\x y -> x . str ", " . y) $ map attrError attrs > attrError (ident,_) = str ident . str " = Happy_Prelude.error \"invalid reference to attribute '" . str ident . str "'\"" > attrHeader = > case attributeType of > [] -> str "HappyAttributes" > _ -> str attributeType ----------------------------------------------------------------------------- -- Strict or non-strict parser > produceStrict :: Bool -> String -> String > produceStrict strict > | strict = str "happySeq = happyDoSeq\n\n" > | otherwise = str "happySeq = happyDontSeq\n\n" ----------------------------------------------------------------------------- Replace all the $n variables with happy_vars, and return a list of all the vars used in this piece of code. > actionVal :: LRAction -> Int > actionVal (LR'Shift state _) = state + 1 > actionVal (LR'Reduce rule _) = -(rule + 1) > actionVal LR'Accept = -1 > actionVal (LR'Multiple _ a) = actionVal a > actionVal LR'Fail = 0 > actionVal LR'MustFail = 0 See notes under "Action Tables" above for some subtleties in this function. > getDefault :: [(Name, LRAction)] -> LRAction > getDefault actions > -- pick out the action for the error token, if any > | (act : _) <- error_acts, act /= LR'Fail > = case act of > > -- use error reduction as the default action, if there is one. 
>          LR'Reduce _ _ -> act
>          LR'Multiple _ (LR'Reduce _ _) -> act
>
>          -- if the error token is shifted or otherwise, don't generate
>          -- a default reduction action. This is *important*!
>          _ -> LR'Fail
>
>   -- do not reduce by default in a state that could shift the catch token.
>   -- otherwise upon an error, we discard viable resumption points from the
>   -- parsing stack.
>   -- This makes a difference on GHC's parser for input such as
>   --   f = foo data; x = + blah
>   -- where we must detect `data` as a parse error early enough to parse
>   -- `foo data` as an application
>   | (LR'Shift{} : _) <- catch_acts
>   = LR'Fail
>   | (LR'Multiple _ LR'Shift{} : _) <- catch_acts
>   = LR'Fail
>
>   | otherwise
>   -- no error or catch actions, pick a reduce to be the default.
>   = case reduces of
>       [] -> LR'Fail
>       (act:_) -> act -- pick the first one we see for now
>
>   where
>     error_acts = [ act | (e, act) <- actions, e == errorTok ]
>     catch_acts = [ act | (e, act) <- actions, e == catchTok ]
>     reduces
>        = [ act | (_, act@(LR'Reduce _ _)) <- actions ]
>       ++ [ act | (_, LR'Multiple _ act@(LR'Reduce _ _)) <- actions ]

-----------------------------------------------------------------------------
-- Generate packed parsing tables.

-- happyActOff ! state
--        Offset within happyTable of actions for state
-- happyGotoOff ! state
--        Offset within happyTable of gotos for state
-- happyTable
--        Combined action/goto table
-- happyDefAction ! state
--        Default action for state
-- happyCheck
--        Indicates whether we should use the default action for state

-- the table is laid out such that the action for a given state & token
-- can be found by:
--
--        off    = happyActOff ! state
--        off_i  = off + token
--        check  | off_i >= 0 = (happyCheck ! off_i) == token
--               | otherwise  = False
--        action | check      = happyTable ! off_i
--               | otherwise  = happyDefAction ! state

-- figure out the default action for each state. This will leave some
-- states with no *real* actions left. 
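
The packed-table lookup described in the comments above can be made concrete with a small model. The following Python sketch uses made-up toy tables (all numbers are hypothetical, not real generated output); it shows how the check table disambiguates slots shared between overlapping state rows:

```python
# Toy model of the packed-table lookup. All table contents are hypothetical.
act_offsets = {0: 1, 1: 3}        # happyActOff: per-state offset into the table
table       = {1: 5, 2: 7, 3: 9}  # happyTable: packed action entries
check       = {1: 0, 2: 1, 3: 0}  # happyCheck: which token owns each slot
def_action  = {0: -2, 1: -4}      # happyDefAction: fallback per state

def action(state, token):
    off_i = act_offsets[state] + token
    # The slot is valid for (state, token) only if the check entry matches;
    # otherwise another state's row owns it and the default action applies.
    if check.get(off_i, -1) == token:
        return table[off_i]
    return def_action[state]
```

Here a lookup of token 2 in state 0 lands on slot 3, but check reveals that the slot belongs to token 0 of another row, so the default action is used instead.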
-- for each state with one or more real actions, sort states by -- width/spread of tokens with real actions, then by number of -- elements with actions, so we get the widest/densest states -- first. (I guess the rationale here is that we can use the -- thin/sparse states to fill in the holes later, and also we -- have to do less searching for the more complicated cases). -- try to pair up states with identical sets of real actions. -- try to fit the actions into the check table, using the ordering -- from above. SG: If you want to know more about similar compression schemes, consult Storing a Sparse Table (https://dl.acm.org/doi/10.1145/359168.359175) One can think of the mapping @\(state,token) -> (offs ! state)+token@ as a hash and @check@ as the way to detect "collisions" (i.e., default entries). > mkTables > :: ActionTable -> GotoTable -> Name -> Name -> Int -> Int -> Int -> > ( [Int] -- happyActOffsets > , [Int] -- happyGotoOffsets > , [Int] -- happyTable > , [Int] -- happyDefAction > , [Int] -- happyCheck > , [Int] -- happyCatchStates > ) > > mkTables action goto first_nonterm' fst_term > n_terminals n_nonterminals n_starts > > = ( elems act_offs > , elems goto_offs > , take max_off (elems table) > , def_actions > , take max_off (elems check) > , shifted_catch_states > ) > where > > (table,check,act_offs,goto_offs,max_off) > = runST (genTables (length actions) > max_token > sorted_actions) > > -- the maximum token number used in the parser > max_token = max n_terminals (n_starts+n_nonterminals) - 1 > > def_actions = map (\(_,_,def,_,_,_) -> def) actions > > actions :: [TableEntry] > actions = > [ (ActionEntry, > state, > actionVal default_act, > if null acts'' then 0 > else fst (last acts'') - fst (head acts''), > length acts'', > acts'') > | (state, acts) <- assocs action, > let (err:catch:_dummy:vec) = assocs acts > vec' = drop (n_starts+n_nonterminals) vec > acts' = filter notFail (err:catch:vec') > default_act = getDefault acts' > acts'' = mkActVals acts' 
default_act > ] > > shifted_catch_states :: [Int] > shifted_catch_states = -- collect the states in which we have just shifted a catchTok > nub [ to_state | (_from_state, acts) <- assocs action > , let (_err:catch:_) = assocs acts > , (_tok, LR'Shift to_state _) <- return catch ] > > -- adjust terminals by -(fst_term+2), so they start at 2 (error is 0, catch is 1). > -- (see ARRAY_NOTES) > adjust :: Name -> Int > adjust token | token == errorTok = 0 > | token == catchTok = 1 > | otherwise = getName token - getName fst_term + 2 > > mkActVals :: [(Name, LRAction)] -> LRAction -> [(Int, Int)] > mkActVals assocs' default_act = > [ (adjust token, actionVal act) > | (token, act) <- assocs' > , act /= default_act ] > > gotos :: [TableEntry] > gotos = [ (GotoEntry, > state, 0, > if null goto_vals then 0 > else fst (last goto_vals) - fst (head goto_vals), > length goto_vals, > goto_vals > ) > | (state, goto_arr) <- assocs goto, > let goto_vals = mkGotoVals (assocs goto_arr) > ] > > -- adjust nonterminals by -first_nonterm', so they start at zero > -- (see ARRAY_NOTES) > mkGotoVals assocs' = > [ (getName token - getName first_nonterm', i) | (token, Goto i) <- assocs' ] > > sorted_actions = sortBy (flip cmp_state) (actions ++ gotos) > cmp_state (_,_,_,width1,tally1,_) (_,_,_,width2,tally2,_) > | width1 < width2 = LT > | width1 == width2 = compare tally1 tally2 > | otherwise = GT > data ActionOrGoto = ActionEntry | GotoEntry > type TableEntry = ( ActionOrGoto > , Int {-stateno-} > , Int {-default-} > , Int {-width-} > , Int {-tally-} > , [(Int,Int)] > ) > genTables > :: Int -- number of actions > -> Int -- maximum token no. 
> -> [TableEntry] -- entries for the table > -> ST s ( UArray Int Int -- table > , UArray Int Int -- check > , UArray Int Int -- action offsets > , UArray Int Int -- goto offsets > , Int -- highest offset in table > ) > > genTables n_actions max_token entries = do > > table <- newArray (0, mAX_TABLE_SIZE) 0 > check <- newArray (0, mAX_TABLE_SIZE) (-1) > act_offs <- newArray (0, n_actions) 0 > goto_offs <- newArray (0, n_actions) 0 > off_arr <- newArray (-max_token, mAX_TABLE_SIZE) 0 > > max_off <- genTables' table check act_offs goto_offs off_arr entries > max_token > > table' <- freeze table > check' <- freeze check > act_offs' <- freeze act_offs > goto_offs' <- freeze goto_offs > return (table',check',act_offs',goto_offs',max_off+1) > where > n_states = n_actions - 1 > mAX_TABLE_SIZE = n_states * (max_token + 1) > genTables' > :: STUArray s Int Int -- table > -> STUArray s Int Int -- check > -> STUArray s Int Int -- action offsets > -> STUArray s Int Int -- goto offsets > -> STUArray s Int Int -- offset array > -> [TableEntry] -- entries for the table > -> Int -- maximum token no. > -> ST s Int -- highest offsets in table > > genTables' table check act_offs goto_offs off_arr entries > max_token > = fit_all entries 0 1 > where > > fit_all [] max_off _ = return max_off > fit_all (s:ss) max_off fst_zero = do > (off, new_max_off, new_fst_zero) <- fit s max_off fst_zero > ss' <- same_states s ss off > writeArray off_arr off 1 > fit_all ss' new_max_off new_fst_zero > > -- try to merge identical states. We only try the next state(s) > -- in the list, but the list is kind-of sorted so we shouldn't > -- miss too many. > same_states _ [] _ = return [] > same_states s@(_,_,_,_,_,acts) ss@((e,no,_,_,_,acts'):ss') off > | acts == acts' = do writeArray (which_off e) no off > same_states s ss' off > | otherwise = return ss > > which_off ActionEntry = act_offs > which_off GotoEntry = goto_offs > > -- fit a vector into the table. 
Return the offset of the vector, > -- the maximum offset used in the table, and the offset of the first > -- entry in the table (used to speed up the lookups a bit). > fit (_,_,_,_,_,[]) max_off fst_zero = return (0,max_off,fst_zero) > > fit (act_or_goto, state_no, _deflt, _, _, state@((t,_):_)) > max_off fst_zero = do > -- start at offset 1 in the table: all the empty states > -- (states with just a default reduction) are mapped to > -- offset zero. > off <- findFreeOffset (-t+fst_zero) check off_arr state > let new_max_off | furthest_right > max_off = furthest_right > | otherwise = max_off > furthest_right = off + max_token > > -- trace ("fit: state " ++ show state_no ++ ", off " ++ show off ++ ", elems " ++ show state) $ do > > writeArray (which_off act_or_goto) state_no off > addState off table check state > new_fst_zero <- findFstFreeSlot check fst_zero > return (off, new_max_off, new_fst_zero) When looking for a free offset in the table, we use the 'check' table rather than the main table. The check table starts off with (-1) in every slot, because that's the only thing that doesn't overlap with any tokens (non-terminals start at 0, terminals start at 1). Because we use 0 for LR'MustFail as well as LR'Fail, we can't check for free offsets in the main table because we can't tell whether a slot is free or not. > -- Find a valid offset in the table for this state. 
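
Before the ST-based implementation, here is a Python sketch of this first-fit search over the check table (names and data are hypothetical, and the off_arr bookkeeping is modelled as a plain set):

```python
# First-fit placement of a state's action row into the shared check table.
def find_free_offset(off, check, used_offsets, state):
    # state is a list of (token, action_value); offset 0 is reserved for
    # states that only have a default action.
    while (off == 0
           or off in used_offsets                              # offset taken
           or any(check.get(off + t, -1) != -1 for t, _ in state)):  # collision
        off += 1
    return off

def add_state(off, table, check, state):
    # Record the actions and mark each slot as owned by its token.
    for t, val in state:
        table[off + t] = val
        check[off + t] = t
```

Rows with different token sets can interleave in the shared table; the check entries written by add_state are what make the later lookups unambiguous.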
> findFreeOffset :: Int -> STUArray s Int Int -> STUArray s Int Int -> [(Int, Int)] -> ST s Int > findFreeOffset off table off_arr state = do > -- offset 0 isn't allowed > if off == 0 then try_next else do > > -- don't use an offset we've used before > b <- readArray off_arr off > if b /= 0 then try_next else do > > -- check whether the actions for this state fit in the table > ok <- fits off state table > if not ok then try_next else return off > where > try_next = findFreeOffset (off+1) table off_arr state > fits :: Int -> [(Int,Int)] -> STUArray s Int Int -> ST s Bool > fits _ [] _ = return True > fits off ((t,_):rest) table = do > i <- readArray table (off+t) > if i /= -1 then return False > else fits off rest table > addState :: Int -> STUArray s Int Int -> STUArray s Int Int -> [(Int, Int)] > -> ST s () > addState _ _ _ [] = return () > addState off table check ((t,val):state) = do > writeArray table (off+t) val > writeArray check (off+t) t > addState off table check state > notFail :: (Name, LRAction) -> Bool > notFail (_, LR'Fail) = False > notFail _ = True > findFstFreeSlot :: STUArray s Int Int -> Int -> ST s Int > findFstFreeSlot table n = do > i <- readArray table n > if i == -1 then return n > else findFstFreeSlot table (n+1) ----------------------------------------------------------------------------- -- Misc. > showsName :: Name -> ShowS > showsName = shows . getName > comment :: String > comment = > "-- parser produced by Happy Version " ++ showVersion version ++ "\n\n" > mkAbsSynCon :: Array Name Name -> Name -> String -> String > mkAbsSynCon fx t = str "HappyAbsSyn" . showsName (fx ! t) > mkHappyVar, mkReduceFun, mkDummyVar :: Int -> String -> String > mkHappyVar n = str "happy_var_" . shows n > mkReduceFun n = str "happyReduce_" . shows n > mkDummyVar n = str "happy_x_" . shows n > mkHappyWrap :: Name -> String -> String > mkHappyWrap n = str "HappyWrap" . 
showsName n > mkHappyWrapCon :: Maybe a -> Name -> (String -> String) -> String -> String > mkHappyWrapCon Nothing _ s = s > mkHappyWrapCon (Just _) n s = brack' (mkHappyWrap n . strspace . s) > mkHappyIn, mkHappyOut :: Name -> String -> String > mkHappyIn n = str "happyIn" . showsName n > mkHappyOut n = str "happyOut" . showsName n > typeParam, typeParamOut :: Name -> Maybe String -> ShowS > typeParam n Nothing = char 't' . showsName n > typeParam _ (Just ty) = brack ty > typeParamOut n Nothing = char 't' . showsName n > typeParamOut n (Just _) = mkHappyWrap n > specReduceFun :: Int -> Bool > specReduceFun = (<= 3) ------------------------------------------------------------------------------- -- Fast string-building functions. > str :: String -> String -> String > str = showString > char :: Char -> String -> String > char c = (c :) > interleave :: String -> [String -> String] -> String -> String > interleave s = foldr (\a b -> a . str s . b) id > interleave' :: String -> [String -> String] -> String -> String > interleave' s = foldr1 (\a b -> a . str s . b) > strspace :: String -> String > strspace = char ' ' > nl :: String -> String > nl = char '\n' > maybestr :: Maybe String -> String -> String > maybestr (Just s) = str s > maybestr _ = id > brack :: String -> String -> String > brack s = str ('(' : s) . char ')' > brack' :: (String -> String) -> String -> String > brack' s = char '(' . s . char ')' ----------------------------------------------------------------------------- -- Convert an integer to a 32-bit number encoded in little-endian -- \xNN\xNN\xNN\xNN format suitable for placing in a string. > hexChars :: [Int] -> String -> String > hexChars is s = foldr (hexChar . toInt32) s is The following definition of @hexChar@ chooses a little endian encoding for `Int32` . Ergo, the compiled parser must use the same endianness when decoding array entries. 
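
As an illustration, the encoding can be sketched in Python (a hedged model of @toInt32@/@hexChar@/@byte@/@hexDig@, not the exact generated output):

```python
# Little-endian \xNN encoding of a 32-bit table entry: each value becomes
# four \xNN escapes, least-significant byte first (a sketch of hexChar).
def to_int32(i):
    if not (-2**31 <= i < 2**31):
        raise ValueError("offset was too large for Int32: %d" % i)
    return i & 0xFFFFFFFF          # two's-complement bit pattern

def hex_char(i):
    w = to_int32(i)
    return ''.join('\\x%02x' % ((w >> (8 * b)) & 0xFF) for b in range(4))
```
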
On big endian architectures, this means users will have to compile with `WORDS_BIGENDIAN`, which is defined in the GHC provided C header `MachDeps.h`. > hexChar :: Int32 -> String -> String > hexChar i s = foldr (toHex . byte i) s [0,1,2,3] > byte :: Int32 -> Int -> Word8 > byte n i = fromIntegral (0xFF .&. shiftR n (i*8)) > toHex :: Word8 -> String -> String > toHex i s = '\\':'x':hexDig (0xF .&. shiftR i 4):hexDig (0xF .&. i):s > hexDig :: Word8 -> Char > hexDig i | i <= 9 = chr (fromIntegral i + ord '0') > | otherwise = chr (fromIntegral i - 10 + ord 'a') > toInt32 :: Int -> Int32 > toInt32 i > | i == fromIntegral i32 = i32 > | otherwise = error ("offset was too large for Int32: " ++ show i) > where i32 = fromIntegral i happy-lib-2.1.7/data/0000755000000000000000000000000007346545000012510 5ustar0000000000000000happy-lib-2.1.7/data/GLR_Base.hs0000644000000000000000000000470207346545000014425 0ustar0000000000000000{- GLR_Base.lhs $Id: GLR_Base.lhs,v 1.4 2004/12/04 15:01:37 paulcc Exp $ -} -- Basic defs required for compiling the data portion of the parser -- We're creating Int-indexed graphs type ForestId = (Int,Int,GSymbol) -- Actions for the GLR machine data GLRAction = Shift Int [Reduction] | Reduce [Reduction] | Accept | Error --- -- A Reduction (s,n,f) removes the top n node-ids, creates a new branch from these -- and labels the branch with the given symbol s. Additionally, the branch may -- hold some semantic value. 
type Reduction = (GSymbol,Int, [ForestId] -> Branch) --- -- A Branch holds the semantic result plus node ids of children data Branch = Branch {b_sem :: GSem, b_nodes :: [ForestId]} deriving Show instance Eq Branch where b1 == b2 = b_nodes b1 == b_nodes b2 ------------------------------------------------------------------------------- -- Utilities for decoding --- -- Tree decode unpacks the forest into a list of results -- - this is ok for small examples, but inefficient for very large examples -- - the data file contains further instances -- - see documentation for further information -- - "Decode_Result" is a synonym used to insert the monad type constr (or not) class TreeDecode a where decode_b :: (ForestId -> [Branch]) -> Branch -> [Decode_Result a] decode :: TreeDecode a => (ForestId -> [Branch]) -> ForestId -> [Decode_Result a] decode f i@(_,_,HappyTok t) = decode_b f (Branch (SemTok t) []) decode f i = [ d | b <- f i, d <- decode_b f b ] ---- generated by Happy, since it means expansion of synonym (not ok in H-98) --instance TreeDecode UserDefTok where -- decode_b f (Branch (SemTok t) []) = [happy_return t] --- -- this is used to multiply the ambiguous possibilities from children --cross_fn :: [a -> b] -> [a] -> [b] --actual type will depend on monad in use. --happy_ap defined by parser generator cross_fn fs as = [ f `happy_ap` a | f <- fs, a <- as] --- -- Label decoding unpacks from the Semantic wrapper type -- - this allows arbitrary values (within the limits of the compiler settings) -- to be recovered from nodes in the tree. 
-- - again, more instances are written in the data file -- - see documentation for further information class LabelDecode a where unpack :: GSem -> a ---- generated by Happy, since it means expansion of synonym (not ok in H-98) --instance LabelDecode UserDefTok where -- unpack (SemTok t) = t happy-lib-2.1.7/data/GLR_Lib.hs0000644000000000000000000003135307346545000014263 0ustar0000000000000000{- GLR_Lib.lhs $Id: GLR_Lib.lhs,v 1.5 2005/08/03 13:42:23 paulcc Exp $ -} {- Parser driver for the GLR parser. (c) University of Durham, Ben Medlock 2001 -- initial code, for structure parsing (c) University of Durham, Paul Callaghan 2004-05 -- extension to semantic rules -- shifting to chart data structure -- supporting hidden left recursion -- many optimisations -} {- supplied by Happy <> module XYZ ( <> lexer -- conditional -} -- probable, but might want to parametrise , doParse , TreeDecode(..), decode -- only for tree decode , LabelDecode(..) -- only for label decode -- standard exports , Tokens , GLRResult(..) , NodeMap , RootNode , ForestId , GSymbol(..) , Branch(..) , GSem(..) ) where #if !defined(__GLASGOW_HASKELL__) # error This code isn't being built with GHC. #endif import Data.Char import qualified Data.Map as Map import Control.Applicative (Applicative(..)) import Control.Monad (foldM, ap) import Data.Maybe (fromJust) import Data.List (insertBy, nub, maximumBy, partition, find, groupBy, delete) import GHC.Prim import GHC.Exts #if defined(HAPPY_DEBUG) import System.IO import System.IO.Unsafe import Text.PrettyPrint #endif {- these inserted by Happy -} fakeimport DATA {- borrowed from GenericTemplate.hs -} #define ILIT(n) n# #define BANG ! 
#define IBOX(n) (I# (n)) #define FAST_INT Int# #if __GLASGOW_HASKELL__ >= 708 #define ULT(n,m) (isTrue# (n <# m)) #define GTE(n,m) (isTrue# (n >=# m)) #define UEQ(n,m) (isTrue# (n ==# m)) #else #define ULT(n,m) (n <# m) #define GTE(n,m) (n >=# m) #define UEQ(n,m) (n ==# m) #endif #define PLUS(n,m) (n +# m) #define MINUS(n,m) (n -# m) #define TIMES(n,m) (n *# m) #define NEGATE(n) (negateInt# (n)) #define IF_GHC(x) (x) #if defined(HAPPY_DEBUG) #define DEBUG_TRACE(s) (happyTrace (s) $ return ()) happyTrace string expr = unsafePerformIO $ do hPutStr stderr string return expr #else #define DEBUG_TRACE(s) {- nothing -} #endif doParse = glr_parse ---------------------------------------------------------------------------- -- Main data types -- A forest is a map of `spans' to branches, where a span is a start position, -- and end position, and a grammatical category for that interval. Branches -- are lists of conjunctions of symbols which can be matched in that span. -- Note that tokens are stored as part of the spans. 
type Forest = Map.Map ForestId [Branch] --- -- End result of parsing: -- - successful parse with rooted forest -- - else syntax error or premature eof type NodeMap = [(ForestId, [Branch])] type RootNode = ForestId type Tokens = [[(Int, GSymbol)]] -- list of ambiguous lexemes data GLRResult = ParseOK RootNode Forest -- forest with root | ParseError Tokens Forest -- partial forest with bad input | ParseEOF Forest -- partial forest (missing input) ----------------------- -- Forest to simplified output forestResult :: Int -> Forest -> GLRResult forestResult length f = case roots of [] -> ParseEOF f [r] -> ParseOK r f rs@(_:_) -> error $ "multiple roots in forest, = " ++ show rs ++ unlines (map show ns_map) where ns_map = Map.toList f roots = [ r | (r@(0,sz,sym),_) <- ns_map , sz == length , sym == top_symbol ] ---------------------------------------------------------------------------- glr_parse :: [[UserDefTok]] -> GLRResult glr_parse toks = case runST Map.empty [0..] (tp toks) of (f,Left ts) -> ParseError ts f -- Error within sentence (f,Right ss) -> forestResult (length toks) f -- Either good parse or EOF where tp tss = doActions [initTS 0] $ zipWith (\i ts -> [(i, t) | t <- ts]) [0..] $ [ [ HappyTok {-j-} t | (j,t) <- zip [0..] ts ] | ts <- tss ] ++ [[HappyEOF]] --- type PM a = ST Forest [Int] a type FStack = TStack ForestId --- -- main function doActions :: [FStack] -> Tokens -> PM (Either Tokens [FStack]) doActions ss [] -- no more tokens (this is ok) = return (Right ss) -- return the stacks (may be empty) doActions stks (tok:toks) = do stkss <- sequence [ do stks' <- reduceAll [] tok_form stks shiftAll tok_form stks' | tok_form <- tok ] let new_stks = merge $ concat stkss DEBUG_TRACE(unlines $ ("Stacks after R*/S pass" ++ show tok) : map show new_stks) case new_stks of -- did this token kill stacks? 
[] -> case toks of [] -> return $ Right [] -- ok if no more tokens _:_ -> return $ Left (tok:toks) -- not ok if some input left _ -> doActions new_stks toks reduceAll :: [GSymbol] -> (Int, GSymbol) -> [FStack] -> PM [(FStack, Int)] reduceAll _ tok [] = return [] reduceAll cyclic_names itok@(i,tok) (stk:stks) = do case action this_state tok of Accept -> reduceAll [] itok stks Error -> reduceAll [] itok stks Shift st rs -> do { ss <- redAll rs ; return $ (stk,st) : ss } Reduce rs -> redAll rs where this_state = top stk redAll rs = do let reds = [ (bf fids,stk',m) | (m,n,bf) <- rs , not (n == 0 && m `elem` cyclic_names) -- remove done ones , (fids,stk') <- pop n stk ] -- WARNING: incomplete if more than one Empty in a prod(!) -- WARNING: can avoid by splitting emps/non-emps DEBUG_TRACE(unlines $ ("Packing reds = " ++ show (length reds)) : map show reds) stks' <- foldM (pack i) stks reds let new_cyclic = [ m | (m,0,_) <- rs , UEQ(this_state, goto this_state m) , m `notElem` cyclic_names ] reduceAll (cyclic_names ++ new_cyclic) itok $ merge stks' shiftAll :: (Int, GSymbol) -> [(FStack, Int)] -> PM [FStack] shiftAll tok [] = return [] shiftAll (j,tok) stks = do let end = j + 1 let key = end `seq` (j,end,tok) newNode key let mss = [ (stk, st) | ss@((_,st):_) <- groupBy (\a b -> snd a == snd b) stks , stk <- merge $ map fst ss ] stks' <- sequence [ do { nid <- getID ; return (push key st nid stk) } | (stk,IBOX(st)) <- mss ] return stks' pack :: Int -> [FStack] -> (Branch, FStack, GSymbol) -> PM [FStack] pack e_i stks (fids,stk,m) | ULT(st, ILIT(0)) = return stks | otherwise = do let s_i = endpoint stk let key = (s_i,e_i,m) DEBUG_TRACE( unlines $ ("Pack at " ++ show key ++ " " ++ show fids) : ("**" ++ show stk) : map show stks) duplicate <- addBranch key fids let stack_matches = [ s | s <- stks , UEQ(top s, st) , let (k,s') = case ts_tail s of x:_ -> x , stk == s' , k == key ] -- look for first obvious packing site let appears_in = not $ null stack_matches DEBUG_TRACE( 
unlines $ ("Stack Matches: " ++ show (length stack_matches)) : map show stack_matches) DEBUG_TRACE( if not (duplicate && appears_in) then "" else unlines $ ("DROP:" ++ show (IBOX(st),key) ++ " -- " ++ show stk) : "*****" : map show stks) if duplicate && appears_in then return stks -- because already there else do nid <- getID case stack_matches of [] -> return $ insertStack (push key st nid stk) stks -- No prior stacks s:_ -> return $ insertStack (push key st nid stk) (delete s stks) -- pack into an existing stack where st = goto (top stk) m --- -- record an entry -- - expected: "i" will contain a token newNode :: ForestId -> PM () newNode i = chgS $ \f -> ((), Map.insert i [] f) --- -- add a new branch -- - due to packing, we check to see if a branch is already there -- - return True if the branch is already there addBranch :: ForestId -> Branch -> PM Bool addBranch i b = do f <- useS id case Map.lookup i f of Nothing -> chgS $ \f -> (False, Map.insert i [b] f) Just bs | b `elem` bs -> return True | otherwise -> chgS $ \f -> (True, Map.insert i (b:bs) f) --- -- only for use with nodes that exist getBranches :: ForestId -> PM [Branch] getBranches i = useS $ \s -> Map.findWithDefault no_such_node i s where no_such_node = error $ "No such node in Forest: " ++ show i ----------------------------------------------------------------------------- -- Auxiliary functions (<>) x y = (x,y) -- syntactic sugar -- Tomita stack -- - basic idea taken from Peter Ljungloef's Licentiate thesis data TStack a = TS { top :: FAST_INT -- state , ts_id :: FAST_INT -- ID , stoup :: !(Maybe a) -- temp holding place, for left rec. 
, ts_tail :: ![(a,TStack a)] -- [(element on arc , child)] } instance Show a => Show (TStack a) where show ts = "St" ++ show (IBOX(top ts)) #if defined(HAPPY_DEBUG) ++ "\n" ++ render (spp $ ts_tail ts) where spp ss = nest 2 $ vcat [ vcat [text (show (v,IBOX(top s))), spp (ts_tail s)] | (v,s) <- ss ] #endif --- -- id uniquely identifies a stack instance Eq (TStack a) where s1 == s2 = UEQ(ts_id s1, ts_id s2) --instance Ord (TStack a) where -- s1 `compare` s2 = IBOX(ts_id s1) `compare` IBOX(ts_id s2) --- -- Nothing special done for insertion -- - NB merging done at strategic points insertStack :: TStack a -> [TStack a] -> [TStack a] insertStack = (:) --- initTS :: Int -> TStack a initTS IBOX(id) = TS ILIT(0) id Nothing [] --- push :: ForestId -> FAST_INT -> Int -> TStack ForestId -> TStack ForestId push x@(s_i,e_i,m) st IBOX(id) stk = TS st id stoup [(x,stk)] where -- only fill stoup for cyclic states that don't consume input stoup | s_i == e_i && UEQ(st, goto st m) = Just x | otherwise = Nothing --- pop :: Int -> TStack a -> [([a],TStack a)] pop 0 ts = [([],ts)] pop 1 st@TS{stoup=Just x} = pop 1 st{stoup=Nothing} ++ [ ([x],st) ] pop n ts = [ (xs ++ [x] , stk') | (x,stk) <- ts_tail ts , (xs,stk') <- pop (n-1) stk ] --- popF :: TStack a -> TStack a popF ts = case ts_tail ts of (_,c):_ -> c --- endpoint stk = case ts_tail stk of [] -> 0 ((_,e_i,_),_):_ -> e_i --- merge :: (Eq a, Show a) => [TStack a] -> [TStack a] merge stks = [ TS st id ss (nub ch) | IBOX(st) <- nub (map (\s -> IBOX(top s)) stks) , let ch = concat [ x | TS st2 _ _ x <- stks, UEQ(st,st2) ] ss = mkss [ s | TS st2 _ s _ <- stks, UEQ(st,st2) ] (BANG IBOX(id)) = head [ IBOX(i) | TS st2 i _ _ <- stks, UEQ(st,st2) ] -- reuse of id is ok, since merge discards old stacks ] where mkss s = case nub [ x | Just x <- s ] of [] -> Nothing [x] -> Just x xs -> error $ unlines $ ("Stoup merge: " ++ show xs) : map show stks ---------------------------------------------------------------------------- -- Monad -- TODO 
(pcc): combine the s/i, or use the modern libraries - might be faster? -- but some other things are much, much, much more expensive! data ST s i a = MkST (s -> i -> (a,s,i)) instance Functor (ST s i) where fmap f (MkST sf) = MkST $ \s i -> case sf s i of (a,s',i') -> (f a,s',i') instance Applicative (ST s i) where pure a = MkST $ \s i -> (a,s,i) (<*>) = ap instance Monad (ST s i) where return = pure MkST sf >>= k = MkST $ \s i -> case sf s i of (a,s',i') -> let (MkST sf') = k a in sf' s' i' runST :: s -> i -> ST s i a -> (s,a) runST s i (MkST sf) = case sf s i of (a,s,_) -> (s,a) chgS :: (s -> (a,s)) -> ST s i a chgS sf = MkST $ \s i -> let (a,s') = sf s in (a,s',i) useS :: (s -> b) -> ST s i b useS fn = MkST $ \s i -> (fn s,s,i) getID :: ST s [Int] Int getID = MkST $ \s (i:is) -> (i,s,is) happy-lib-2.1.7/data/HappyTemplate.hs0000644000000000000000000005765307346545000015641 0ustar0000000000000000-- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ #if !defined(__GLASGOW_HASKELL__) # error This code isn't being built with GHC. #endif -- Get WORDS_BIGENDIAN (if defined) #include "MachDeps.h" -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. 
#define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Happy_Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Happy_Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Happy_Prelude.Bool) #define PLUS(n,m) (n Happy_GHC_Exts.+# m) #define MINUS(n,m) (n Happy_GHC_Exts.-# m) #define TIMES(n,m) (n Happy_GHC_Exts.*# m) #define NEGATE(n) (Happy_GHC_Exts.negateInt# (n)) type Happy_Int = Happy_GHC_Exts.Int# data Happy_IntList = HappyCons Happy_Int Happy_IntList #define INVALID_TOK -1# #define ERROR_TOK 0# #define CATCH_TOK 1# #if defined(HAPPY_COERCE) # define GET_ERROR_TOKEN(x) (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# i) -> i }) # define MK_ERROR_TOKEN(i) (Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# i)) # define MK_TOKEN(x) (happyInTok (x)) #else # define GET_ERROR_TOKEN(x) (case x of { HappyErrorToken (Happy_GHC_Exts.I# i) -> i }) # define MK_ERROR_TOKEN(i) (HappyErrorToken (Happy_GHC_Exts.I# i)) # define MK_TOKEN(x) (HappyTerminal (x)) #endif #if defined(HAPPY_DEBUG) # define DEBUG_TRACE(s) (happyTrace (s)) Happy_Prelude.$ happyTrace string expr = Happy_System_IO_Unsafe.unsafePerformIO Happy_Prelude.$ do Happy_System_IO.hPutStr Happy_System_IO.stderr string Happy_Prelude.return expr #else # define DEBUG_TRACE(s) {- nothing -} #endif infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyDoParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. 
happyAccept ERROR_TOK tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = DEBUG_TRACE("state: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ",\ttoken: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ ",\taction: ") case happyDecodeAction (happyNextAction i st) of HappyFail -> DEBUG_TRACE("failing.\n") happyFail i tk st HappyAccept -> DEBUG_TRACE("accept.\n") happyAccept i tk st HappyReduce rule -> DEBUG_TRACE("reduce (rule " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# rule) Happy_Prelude.++ ")") (happyReduceArr Happy_Data_Array.! (Happy_GHC_Exts.I# rule)) i tk st HappyShift new_state -> DEBUG_TRACE("shift, enter state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) Happy_Prelude.++ "\n") happyShift new_state i tk st {-# INLINE happyNextAction #-} happyNextAction i st = case happyIndexActionTable i st of Happy_Prelude.Just (Happy_GHC_Exts.I# act) -> act Happy_Prelude.Nothing -> happyIndexOffAddr happyDefActions st {-# INLINE happyIndexActionTable #-} happyIndexActionTable i st | GTE(i, 0#), GTE(off, 0#), EQ(happyIndexOffAddr happyCheck off, i) -- i >= 0: Guard against INVALID_TOK (do the default action, which ultimately errors) -- off >= 0: Otherwise it's a default action -- equality check: Ensure that the entry in the compressed array is owned by st = Happy_Prelude.Just (Happy_GHC_Exts.I# (happyIndexOffAddr happyTable off)) | Happy_Prelude.otherwise = Happy_Prelude.Nothing where off = PLUS(happyIndexOffAddr happyActOffsets st, i) data HappyAction = HappyFail | HappyAccept | HappyReduce Happy_Int -- rule number | HappyShift Happy_Int -- new state deriving Happy_Prelude.Show {-# INLINE happyDecodeAction #-} happyDecodeAction :: Happy_Int -> 
HappyAction happyDecodeAction 0# = HappyFail happyDecodeAction -1# = HappyAccept happyDecodeAction action | LT(action, 0#) = HappyReduce NEGATE(PLUS(action, 1#)) | Happy_Prelude.otherwise = HappyShift MINUS(action, 1#) {-# INLINE happyIndexGotoTable #-} happyIndexGotoTable nt st = happyIndexOffAddr happyTable off where off = PLUS(happyIndexOffAddr happyGotoOffsets st, nt) {-# INLINE happyIndexOffAddr #-} happyIndexOffAddr :: HappyAddr -> Happy_Int -> Happy_Int happyIndexOffAddr (HappyA# arr) off = #if __GLASGOW_HASKELL__ >= 901 Happy_GHC_Exts.int32ToInt# -- qualified import because it doesn't exist on older GHC's #endif #ifdef WORDS_BIGENDIAN -- The CI of `alex` tests this code path (Happy_GHC_Exts.word32ToInt32# (Happy_GHC_Exts.wordToWord32# (Happy_GHC_Exts.byteSwap32# (Happy_GHC_Exts.word32ToWord# (Happy_GHC_Exts.int32ToWord32# #endif (Happy_GHC_Exts.indexInt32OffAddr# arr off) #ifdef WORDS_BIGENDIAN ))))) #endif happyIndexRuleArr :: Happy_Int -> (# Happy_Int, Happy_Int #) happyIndexRuleArr r = (# nt, len #) where !(Happy_GHC_Exts.I# n_starts) = happy_n_starts offs = TIMES(MINUS(r,n_starts),2#) nt = happyIndexOffAddr happyRuleArr offs len = happyIndexOffAddr happyRuleArr PLUS(offs,1#) data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state ERROR_TOK tk st sts stk@(x `HappyStk` _) = -- See "Error Fixup" below let i = GET_ERROR_TOKEN(x) in DEBUG_TRACE("shifting the error token") happyDoAction i tk new_state (HappyCons st sts) stk happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons st sts) (MK_TOKEN(tk) `HappyStk` stk) -- happyReduce is specialised for the common cases. 
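The integer encoding that `happyDecodeAction` unpacks can be illustrated with a small standalone sketch. The `Action` type and `decodeAction` below are invented stand-ins (plain boxed `Int` instead of the template's unboxed `Happy_Int`), mirroring the scheme above: 0 means fail, -1 means accept, any other negative value `a` means reduce by rule `-(a + 1)`, and a positive value `a` means shift into state `a - 1`.

```haskell
-- Boxed sketch of the action encoding; names are hypothetical,
-- not part of the generated template.
data Action = Fail | Accept | Reduce Int | Shift Int
  deriving (Show, Eq)

decodeAction :: Int -> Action
decodeAction 0    = Fail
decodeAction (-1) = Accept
decodeAction a
  | a < 0     = Reduce (negate (a + 1)) -- e.g. -5 encodes rule 4
  | otherwise = Shift (a - 1)           -- e.g.  3 encodes state 2
```

Packing all four action kinds into one machine integer is what lets the whole action table live in a single flat array indexed by `happyActOffsets`.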
happySpecReduce_0 nt fn j tk st sts stk = happySeq fn (happyGoto nt j tk st (HappyCons st sts) (fn `HappyStk` stk)) happySpecReduce_1 nt fn j tk old_st sts@(HappyCons st _) (v1 `HappyStk` stk') = let r = fn v1 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happySpecReduce_2 nt fn j tk old_st (HappyCons _ sts@(HappyCons st _)) (v1 `HappyStk` v2 `HappyStk` stk') = let r = fn v1 v2 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happySpecReduce_3 nt fn j tk old_st (HappyCons _ (HappyCons _ sts@(HappyCons st _))) (v1 `HappyStk` v2 `HappyStk` v3 `HappyStk` stk') = let r = fn v1 v2 v3 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happyReduce k nt fn j tk st sts stk = case happyDrop MINUS(k,(1# :: Happy_Int)) sts of sts1@(HappyCons st1 _) -> let r = fn stk in -- it doesn't hurt to always seq here... st `happyTcHack` happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons st sts) of sts1@(HappyCons st1 _) -> let drop_stk = happyDropStk k stk in j `happyTcHack` happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons st sts) of sts1@(HappyCons st1 _) -> let drop_stk = happyDropStk k stk off = happyIndexOffAddr happyGotoOffsets st1 off_i = PLUS(off, nt) new_state = happyIndexOffAddr happyTable off_i in j `happyTcHack` happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons _ t) = happyDrop MINUS(n,(1# :: Happy_Int)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk MINUS(n,(1#::Happy_Int)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = DEBUG_TRACE(", goto state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) 
Happy_Prelude.++ "\n")
                   happyDoAction j tk new_state
  where new_state = happyIndexGotoTable nt st

{- Note [Error recovery]
~~~~~~~~~~~~~~~~~~~~~~~~
When there is no applicable action for the current lookahead token `tk`,
happy enters error recovery mode. Depending on whether the grammar file
declares the two-action form `%error { abort } { report }` for Resumptive
Error Handling, it works in one phase (not resumptive) or two phases
(resumptive):

 1. Fixup mode:
    Try to see if there is an action for the error token ERROR_TOK. If there
    is, do *not* emit an error and pretend instead that an `error` token was
    inserted.
    When there is no ERROR_TOK action, report an error.

    In non-resumptive error handling, calling the single error handler
    (e.g. `happyError`) will throw an exception and abort the parser.
    However, in resumptive error handling we enter *error resumption mode*.

 2. Error resumption mode:
    After reporting the error (with `report`), happy will attempt to find
    a good state stack to resume parsing in.
    For each candidate stack, it discards input until one of the candidates
    resumes (i.e. shifts the current input).
    If no candidate resumes before the end of input, resumption fails and we
    call the `abort` function, to much the same effect as in non-resumptive
    error handling.

    Candidate stacks are declared by the grammar author using the special
    `catch` terminal and called "catch frames".
    This mechanism is described in detail in Note [happyResume].

The `catch` resumption mechanism (2) is what is usually associated with
`error` in `bison` or `menhir`. Since `error` is used for the Fixup
mechanism (1) above, we call the corresponding token `catch`.
Furthermore, in contrast to `bison`, our implementation of `catch`
non-deterministically considers multiple catch frames on the stack for
resumption (see Note [Multiple catch frames]).

Note [happyResume]
~~~~~~~~~~~~~~~~~~
`happyResume` implements the resumption mechanism from Note [Error recovery].
It is best understood by example.
Consider Exp :: { String } Exp : '1' { "1" } | catch { "catch" } | Exp '+' Exp %shift { $1 Happy_Prelude.++ " + " Happy_Prelude.++ $3 } -- %shift: associate 1 + 1 + 1 to the right | '(' Exp ')' { "(" Happy_Prelude.++ $2 Happy_Prelude.++ ")" } The idea of the use of `catch` here is that upon encountering a parse error during expression parsing, we can gracefully degrade using the `catch` rule, still producing a partial syntax tree and keep on parsing to find further syntax errors. Let's trace the parser state for input 11+1, which will error out after shifting 1. After shifting, we have the following item stack (growing downwards and omitting transitive closure items): State 0: %start_parseExp -> . Exp State 5: Exp -> '1' . (Stack as a list of state numbers: [5,0].) As Note [Error recovery] describes, we will first try Fixup mode. That fails because no production can shift the `error` token. Next we try Error resumption mode. This works as follows: 1. Pop off the item stack until we find an item that can shift the `catch` token. (Implemented in `pop_items`.) * State 5 cannot shift catch. Pop. * State 0 can shift catch, which would transition into State 4: Exp -> catch . So record the *stack* `[4,0]` after doing the shift transition. We call this a *catch frame*, where the top is a *catch state*, corresponding to an item in which we just shifted a `catch` token. There can be multiple such catch stacks, see Note [Multiple catch frames]. 2. Discard tokens from the input until the lookahead can be shifted in one of the catch stacks. (Implemented in `discard_input_until_exp` and `some_catch_state_shifts`.) * We cannot shift the current lookahead '1' in state 4, so we discard * We *can* shift the next lookahead '+' in state 4, but only after reducing, which pops State 4 and goes to State 3: State 3: %start_parseExp -> Exp . Exp -> Exp . '+' Exp Here we can shift '+'. 
As you can see, to implement this machinery we need to simulate the operation of the LALR automaton, especially reduction (`happySimulateReduce`). Note [Multiple catch frames] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For fewer spurious error messages, it can be beneficial to trace multiple catch items. Consider Exp : '1' | catch | Exp '+' Exp %shift | '(' Exp ')' Let's trace the parser state for input (;+1, which will error out after shifting (. After shifting, we have the following item stack (growing downwards): State 0: %start_parseExp -> . Exp State 6: Exp -> '(' . Exp ')' Upon error, we want to find items in the stack which can shift a catch token. Note that both State 0 and State 6 can shift a catch token, transitioning into State 4: Exp -> catch . Hence we record the catch frames `[4,6,0]` and `[4,0]` for possible resumption. Which catch frame do we pick for resumption? Note that resuming catch frame `[4,0]` will parse as "catch+1", whereas resuming the innermost frame `[4,6,0]` corresponds to parsing "(catch+1". The latter would keep discarding input until the closing ')' is found. So we will discard + and 1, leading to a spurious syntax error at the end of input, aborting the parse and never producing a partial syntax tree. Bad! It is far preferable to resume with catch frame `[4,0]`, where we can resume successfully on input +, so that is what we do. In general, we pick the catch frame for resumption that discards the least amount of input for a successful shift, preferring the topmost such catch frame. -} -- happyFail :: Happy_Int -> Token -> Happy_Int -> _ -- This function triggers Note [Error recovery]. -- If the current token is ERROR_TOK, phase (1) has failed and we might try -- phase (2). happyFail ERROR_TOK = happyFixupFailed happyFail i = happyTryFixup i -- Enter Error Fixup (see Note [Error recovery]): -- generate an error token, save the old token and carry on. 
-- When a `happyShift` accepts the error token, we will pop off the error token -- to resume parsing with the current lookahead `i`. happyTryFixup i tk action sts stk = DEBUG_TRACE("entering `error` fixup.\n") happyDoAction ERROR_TOK tk action sts (MK_ERROR_TOKEN(i) `HappyStk` stk) -- NB: `happyShift` will simply pop the error token and carry on with -- `tk`. Hence we don't change `tk` in the call here -- See Note [Error recovery], phase (2). -- Enter resumption mode after reporting the error by calling `happyResume`. happyFixupFailed tk st sts (x `HappyStk` stk) = let i = GET_ERROR_TOKEN(x) in DEBUG_TRACE("`error` fixup failed.\n") let resume = happyResume i tk st sts stk expected = happyExpectedTokens st sts in happyReport i tk expected resume -- happyResume :: Happy_Int -> Token -> Happy_Int -> _ -- See Note [happyResume] happyResume i tk st sts stk = pop_items [] st sts stk where !(Happy_GHC_Exts.I# n_starts) = happy_n_starts -- this is to test whether we have a start token !(Happy_GHC_Exts.I# eof_i) = happy_n_terms Happy_Prelude.- 1 -- this is the token number of the EOF token happy_list_to_list :: Happy_IntList -> [Happy_Prelude.Int] happy_list_to_list (HappyCons st sts) | LT(st, n_starts) = [(Happy_GHC_Exts.I# st)] | Happy_Prelude.otherwise = (Happy_GHC_Exts.I# st) : happy_list_to_list sts -- See (1) of Note [happyResume] pop_items catch_frames st sts stk | LT(st, n_starts) = DEBUG_TRACE("reached start state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", ") if Happy_Prelude.null catch_frames_new then DEBUG_TRACE("no resumption.\n") happyAbort else DEBUG_TRACE("now discard input, trying to anchor in states " Happy_Prelude.++ Happy_Prelude.show (Happy_Prelude.map (happy_list_to_list . 
Happy_Prelude.fst) (Happy_Prelude.reverse catch_frames_new)) Happy_Prelude.++ ".\n") discard_input_until_exp i tk (Happy_Prelude.reverse catch_frames_new) | (HappyCons st1 sts1) <- sts, _ `HappyStk` stk1 <- stk = pop_items catch_frames_new st1 sts1 stk1 where !catch_frames_new | HappyShift new_state <- happyDecodeAction (happyNextAction CATCH_TOK st) , DEBUG_TRACE("can shift catch token in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", into state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) Happy_Prelude.++ "\n") Happy_Prelude.null (Happy_Prelude.filter (\(HappyCons _ (HappyCons h _),_) -> EQ(st,h)) catch_frames) = (HappyCons new_state (HappyCons st sts), MK_ERROR_TOKEN(i) `HappyStk` stk):catch_frames -- MK_ERROR_TOKEN(i) is just some dummy that should not be accessed by user code | Happy_Prelude.otherwise = DEBUG_TRACE("already shifted or can't shift catch in " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ "\n") catch_frames -- See (2) of Note [happyResume] discard_input_until_exp i tk catch_frames | Happy_Prelude.Just (HappyCons st (HappyCons catch_st sts), catch_frame) <- some_catch_state_shifts i catch_frames = DEBUG_TRACE("found expected token in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ " after shifting from " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# catch_st) Happy_Prelude.++ ": " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ "\n") happyDoAction i tk st (HappyCons catch_st sts) catch_frame | EQ(i,eof_i) -- is i EOF? = DEBUG_TRACE("reached EOF, cannot resume. 
abort parse :(\n") happyAbort | Happy_Prelude.otherwise = DEBUG_TRACE("discard token " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ "\n") happyLex (\eof_tk -> discard_input_until_exp eof_i eof_tk catch_frames) -- eof (\i tk -> discard_input_until_exp i tk catch_frames) -- not eof some_catch_state_shifts _ [] = DEBUG_TRACE("no catch state could shift.\n") Happy_Prelude.Nothing some_catch_state_shifts i catch_frames@(((HappyCons st sts),_):_) = try_head i st sts catch_frames where try_head i st sts catch_frames = -- PRECONDITION: head catch_frames = (HappyCons st sts) DEBUG_TRACE("trying token " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ " in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ": ") case happyDecodeAction (happyNextAction i st) of HappyFail -> DEBUG_TRACE("fail.\n") some_catch_state_shifts i (Happy_Prelude.tail catch_frames) HappyAccept -> DEBUG_TRACE("accept.\n") Happy_Prelude.Just (Happy_Prelude.head catch_frames) HappyShift _ -> DEBUG_TRACE("shift.\n") Happy_Prelude.Just (Happy_Prelude.head catch_frames) HappyReduce r -> case happySimulateReduce r st sts of (HappyCons st1 sts1) -> try_head i st1 sts1 catch_frames happySimulateReduce r st sts = DEBUG_TRACE("simulate reduction of rule " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# r) Happy_Prelude.++ ", ") let (# nt, len #) = happyIndexRuleArr r in DEBUG_TRACE("nt " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# nt) Happy_Prelude.++ ", len: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# len) Happy_Prelude.++ ", new_st ") let !(sts1@(HappyCons st1 _)) = happyDrop len (HappyCons st sts) new_st = happyIndexGotoTable nt st1 in DEBUG_TRACE(Happy_Prelude.show (Happy_GHC_Exts.I# new_st) Happy_Prelude.++ ".\n") (HappyCons new_st sts1) happyTokenToString :: Happy_Prelude.Int -> Happy_Prelude.String happyTokenToString i = happyTokenStrings Happy_Prelude.!! 
(i Happy_Prelude.- 2) -- 2: errorTok, catchTok

happyExpectedTokens :: Happy_Int -> Happy_IntList -> [Happy_Prelude.String]
-- Upon a parse error, we want to suggest tokens that are expected in that
-- situation. This function computes such tokens.
-- It works by examining the top of the state stack.
-- For every token number that does a shift transition, record that token number.
-- For every token number that does a reduce transition, simulate that reduction
-- on the state stack and repeat.
-- The recorded token numbers are then formatted with 'happyTokenToString' and
-- returned.
happyExpectedTokens st sts =
  DEBUG_TRACE("constructing expected tokens.\n")
  Happy_Prelude.map happyTokenToString (search_shifts st sts [])
  where
    search_shifts st sts shifts = Happy_Prelude.foldr (add_action st sts) shifts (distinct_actions st)
    add_action st sts (Happy_GHC_Exts.I# i, Happy_GHC_Exts.I# act) shifts =
      DEBUG_TRACE("found action in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st)
                  Happy_Prelude.++ ", input " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i)
                  Happy_Prelude.++ ", " Happy_Prelude.++ Happy_Prelude.show (happyDecodeAction act)
                  Happy_Prelude.++ "\n")
      case happyDecodeAction act of
        HappyFail   -> shifts
        HappyAccept -> shifts -- This would always be %eof or error...
Not helpful HappyShift _ -> Happy_Prelude.insert (Happy_GHC_Exts.I# i) shifts HappyReduce r -> case happySimulateReduce r st sts of (HappyCons st1 sts1) -> search_shifts st1 sts1 shifts distinct_actions st -- The (token number, action) pairs of all actions in the given state = ((-1), (Happy_GHC_Exts.I# (happyIndexOffAddr happyDefActions st))) : [ (i, act) | i <- [begin_i..happy_n_terms], act <- get_act row_off i ] where row_off = happyIndexOffAddr happyActOffsets st begin_i = 2 -- +2: errorTok,catchTok get_act off (Happy_GHC_Exts.I# i) -- happyIndexActionTable with cached row offset | let off_i = PLUS(off,i) , GTE(off_i,0#) , EQ(happyIndexOffAddr happyCheck off_i,i) = [(Happy_GHC_Exts.I# (happyIndexOffAddr happyTable off_i))] | Happy_Prelude.otherwise = [] -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Happy_Prelude.error "Internal Happy parser panic. This is not supposed to happen! Please open a bug report at https://github.com/haskell/happy/issues.\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_Int -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Happy_GHC_Exts.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. 
{-# NOINLINE happyDoAction #-}
{-# NOINLINE happyTable #-}
{-# NOINLINE happyCheck #-}
{-# NOINLINE happyActOffsets #-}
{-# NOINLINE happyGotoOffsets #-}
{-# NOINLINE happyDefActions #-}
{-# NOINLINE happyShift #-}
{-# NOINLINE happySpecReduce_0 #-}
{-# NOINLINE happySpecReduce_1 #-}
{-# NOINLINE happySpecReduce_2 #-}
{-# NOINLINE happySpecReduce_3 #-}
{-# NOINLINE happyReduce #-}
{-# NOINLINE happyMonadReduce #-}
{-# NOINLINE happyGoto #-}
{-# NOINLINE happyFail #-}

-- end of Happy Template.

happy-lib-2.1.7/frontend/boot-src/AttrGrammarParser.ly

This parser parses the contents of the attribute grammar into a list of
rules. A rule can either be an assignment to an attribute of the LHS (a
synthesized attribute), an assignment to an attribute of the RHS (an
inherited attribute), or a conditional statement.

> {
> {-# OPTIONS_GHC -w #-}
> module Happy.Frontend.AttrGrammar.Parser (agParser) where
> import Happy.Frontend.ParseMonad.Class
> import Happy.Frontend.ParseMonad
> import Happy.Frontend.AttrGrammar
> }

> %name agParser
> %tokentype { AgToken }
> %token
>   "{"      { AgTok_LBrace }
>   "}"      { AgTok_RBrace }
>   ";"      { AgTok_Semicolon }
>   "="      { AgTok_Eq }
>   where    { AgTok_Where }
>   selfRef  { AgTok_SelfRef _ }
>   subRef   { AgTok_SubRef _ }
>   rightRef { AgTok_RightmostRef _ }
>   unknown  { AgTok_Unknown _ }

> %monad { P }
> %lexer { lexTokenP } { AgTok_EOF }

> %%

> agParser :: { [AgRule] }
>   : rules { $1 }

> rules :: { [AgRule] }
>   : rule ";" rules { $1 : $3 }
>   | rule           { $1 : [] }
>   |                { [] }

> rule :: { AgRule }
>   : selfRef "=" code  { SelfAssign $ MkAgSelfAssign (selfRefVal $1) $3 }
>   | subRef "=" code   { SubAssign $ MkAgSubAssign (subRefVal $1) $3 }
>   | rightRef "=" code { RightmostAssign (rightRefVal $1) $3 }
>   | where code        { Conditional $ MkAgConditional $2 }

> code :: { [AgToken] }
>   : "{" code0 "}" code
{ [$1] ++ $2 ++ [$3] ++ $4 } > | "=" code { $1 : $2 } > | selfRef code { $1 : $2 } > | subRef code { $1 : $2 } > | rightRef code { $1 : $2 } > | unknown code { $1 : $2 } > | { [] } > code0 :: { [AgToken] } > : "{" code0 "}" code0 { [$1] ++ $2 ++ [$3] ++ $4 } > | "=" code0 { $1 : $2 } > | ";" code0 { $1 : $2 } > | selfRef code0 { $1 : $2 } > | subRef code0 { $1 : $2 } > | rightRef code { $1 : $2 } > | unknown code0 { $1 : $2 } > | { [] } > { > happyError :: P a > happyError = failP (\l -> show l ++ ": Parse error\n") > } happy-lib-2.1.7/frontend/boot-src/Parser.ly0000644000000000000000000001457107346545000016760 0ustar0000000000000000----------------------------------------------------------------------------- $Id: Parser.ly,v 1.15 2005/01/26 01:10:42 ross Exp $ The parser. (c) 1993-2000 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > { > {-# OPTIONS_GHC -w #-} > module Happy.Frontend.Parser (ourParser) where > import Happy.Frontend.ParseMonad.Class > import Happy.Frontend.ParseMonad > import Happy.Frontend.AbsSyn > import Happy.Frontend.Lexer > } > %name ourParser > %tokentype { Token } > %token > id { TokenInfo $$ TokId } > spec_tokentype { TokenKW TokSpecId_TokenType } > spec_token { TokenKW TokSpecId_Token } > spec_name { TokenKW TokSpecId_Name } > spec_partial { TokenKW TokSpecId_Partial } > spec_lexer { TokenKW TokSpecId_Lexer } > spec_imported_identity { TokenKW TokSpecId_ImportedIdentity } > spec_monad { TokenKW TokSpecId_Monad } > spec_nonassoc { TokenKW TokSpecId_Nonassoc } > spec_left { TokenKW TokSpecId_Left } > spec_right { TokenKW TokSpecId_Right } > spec_prec { TokenKW TokSpecId_Prec } > spec_shift { TokenKW TokSpecId_Shift } > spec_expect { TokenKW TokSpecId_Expect } > spec_error { TokenKW TokSpecId_Error } > spec_errorexpected { TokenKW TokSpecId_ErrorExpected } > spec_errorhandlertype { TokenKW TokSpecId_ErrorHandlerType } > spec_attribute { TokenKW TokSpecId_Attribute } > spec_attributetype { 
TokenKW TokSpecId_Attributetype } > pragma { TokenInfo $$ TokPragmaQuote } > code { TokenInfo $$ TokCodeQuote } > int { TokenNum $$ TokNum } > ":" { TokenKW TokColon } > ";" { TokenKW TokSemiColon } > "::" { TokenKW TokDoubleColon } > "%%" { TokenKW TokDoublePercent } > "|" { TokenKW TokBar } > "(" { TokenKW TokParenL } > ")" { TokenKW TokParenR } > "," { TokenKW TokComma } > %monad { P } > %lexer { lexTokenP } { TokenEOF } > %% > parser :: { BookendedAbsSyn } > : optPragma optCode core_parser optCode { BookendedAbsSyn $1 $2 $3 $4 } > core_parser :: { AbsSyn String } > : tokInfos "%%" rules { AbsSyn (reverse $1) (reverse $3) } > rules :: { [Rule String] } > : rules rule { $2 : $1 } > | rule { [$1] } > rule :: { Rule String } > : id params "::" code ":" prods { Rule $1 $2 $6 (Just $4) } > | id params "::" code id ":" prods { Rule $1 $2 $7 (Just $4) } > | id params ":" prods { Rule $1 $2 $4 Nothing } > params :: { [String] } > : "(" comma_ids ")" { reverse $2 } > | {- empty -} { [] } > comma_ids :: { [String] } > : id { [$1] } > | comma_ids "," id { $3 : $1 } > prods :: { [Prod String] } > : prod "|" prods { $1 : $3 } > | prod { [$1] } > prod :: { Prod String } > : terms prec code ";" {% lineP >>= \l -> return (Prod $1 $3 l $2) } > | terms prec code {% lineP >>= \l -> return (Prod $1 $3 l $2) } > term :: { Term } > : id { App $1 [] } > | id "(" comma_terms ")" { App $1 (reverse $3) } > terms :: { [Term] } > : terms_rev { reverse $1 } > | { [] } > terms_rev :: { [Term] } > : term { [$1] } > | terms_rev term { $2 : $1 } > comma_terms :: { [Term] } > : term { [$1] } > | comma_terms "," term { $3 : $1 } > prec :: { Prec } > : spec_prec id { PrecId $2 } > | spec_shift { PrecShift } > | { PrecNone } > tokInfos :: { [Directive String] } > : tokInfos tokInfo { $2 : $1 } > | tokInfo { [$1] } > tokInfo :: { Directive String } > : spec_tokentype code { TokenType $2 } > | spec_token tokenSpecs { TokenSpec $2 } > | spec_name id optStart { TokenName $2 $3 False } > | spec_partial 
id optStart { TokenName $2 $3 True } > | spec_imported_identity { TokenImportedIdentity } > | spec_lexer code code { TokenLexer $2 $3 } > | spec_monad code { TokenMonad "()" $2 "Happy_Prelude.>>=" "Happy_Prelude.return" } > | spec_monad code code { TokenMonad $2 $3 "Happy_Prelude.>>=" "Happy_Prelude.return" } > | spec_monad code code code { TokenMonad "()" $2 $3 $4 } > | spec_monad code code code code { TokenMonad $2 $3 $4 $5 } > | spec_nonassoc ids { TokenNonassoc $2 } > | spec_right ids { TokenRight $2 } > | spec_left ids { TokenLeft $2 } > | spec_expect int { TokenExpect $2 } > | spec_error code optCode { TokenError $2 $3 } > | spec_errorexpected { TokenErrorExpected } > | spec_errorhandlertype id { TokenErrorHandlerType $2 } > | spec_attributetype code { TokenAttributetype $2 } > | spec_attribute id code { TokenAttribute $2 $3 } > optStart :: { Maybe String } > : id { Just $1 } > | {- nothing -} { Nothing } > tokenSpecs :: { [(String, TokenSpec)] } > : tokenSpec tokenSpecs { $1:$2 } > | tokenSpec { [$1] } > tokenSpec :: { (String, TokenSpec) } > : id code { ($1, parseTokenSpec $2) } > ids :: { [String] } > : id ids { $1 : $2 } > | {- nothing -} { [] } > optCode :: { Maybe String } > : code { Just $1 } > | {- nothing -} { Nothing } > optPragma :: { Maybe String } > : pragma { Just $1 } > | {- nothing -} { Nothing } > { > happyError :: P a > happyError = failP (\l -> show l ++ ": Parse error\n") > } happy-lib-2.1.7/frontend/0000755000000000000000000000000007346545000013416 5ustar0000000000000000happy-lib-2.1.7/frontend/bootstrap.sh0000644000000000000000000000035207346545000015767 0ustar0000000000000000#!/usr/bin/env sh BASEDIR=$(dirname "$0") happy -agc "$BASEDIR/boot-src/Parser.ly" -o "$BASEDIR/src/Happy/Frontend/Parser.hs" happy -agc "$BASEDIR/boot-src/AttrGrammarParser.ly" -o "$BASEDIR/src/Happy/Frontend/AttrGrammar/Parser.hs" happy-lib-2.1.7/frontend/src/Happy/0000755000000000000000000000000007346545000015266 
happy-lib-2.1.7/frontend/src/Happy/Frontend.hs

module Happy.Frontend where

import Happy.Frontend.AbsSyn
import Happy.Frontend.Parser
import Happy.Frontend.ParseMonad.Class

parseYFileContents :: String -> ParseResult BookendedAbsSyn
parseYFileContents contents = runFromStartP ourParser contents 1

data FileType = Y | LY

fileNameAndType :: String -> Maybe (String, FileType)
fileNameAndType = nameType . reverse
  where nameType ('y':'.':f)     = Just (reverse f, Y)
        nameType ('y':'l':'.':f) = Just (reverse f, LY)
        nameType _               = Nothing

-- Delit, converting an ly file into a y file.
deLitify :: String -> String
deLitify = deLit
  where
    deLit ('>':' ':r)  = deLit1 r
    deLit ('>':'\t':r) = '\t' : deLit1 r
    deLit ('>':'\n':r) = deLit r
    deLit ('>':_)      = error "Error when de-litify-ing"
    deLit ('\n':r)     = '\n' : deLit r
    deLit r            = deLit2 r
    deLit1 ('\n':r) = '\n' : deLit r
    deLit1 (c:r)    = c : deLit1 r
    deLit1 []       = []
    deLit2 ('\n':r) = '\n' : deLit r
    deLit2 (_:r)    = deLit2 r
    deLit2 []       = []

happy-lib-2.1.7/frontend/src/Happy/Frontend/AbsSyn.lhs

-----------------------------------------------------------------------------
Abstract syntax for grammar files.

(c) 1993-2001 Andy Gill, Simon Marlow
-----------------------------------------------------------------------------

Here is the abstract syntax of the language we parse.
-- reexport > ) where > import Data.Char (isAlphaNum) > import Happy.Grammar > import Happy.Grammar.ExpressionWithHole > data BookendedAbsSyn > = BookendedAbsSyn > (Maybe String) -- options > (Maybe String) -- header > (AbsSyn String) > (Maybe String) -- footer > data AbsSyn e > = AbsSyn > [Directive String] -- directives > [Rule e] -- productions > data Rule e > = Rule > String -- name of the rule > [String] -- parameters (see parametrized productions) > [Prod e] -- productions > (Maybe String) -- type of the rule > data Prod e > = Prod > [Term] -- terms that make up the rule > e -- code body that runs when the rule reduces > Int -- line number > Prec -- inline precedence annotation for the rule > data Term > = App > String -- name of the term > [Term] -- parameter arguments (usually this is empty) > data Prec > = PrecNone -- no user-specified precedence > | PrecShift -- %shift > | PrecId String -- %prec ID > deriving Show %----------------------------------------------------------------------------- Parser Generator Directives. ToDo: find a consistent way to analyse all the directives together and generate some error messages. 
> data Directive a > = TokenType String -- %tokentype > | TokenSpec [(a, TokenSpec)] -- %token > | TokenName String (Maybe String) Bool -- %name/%partial (True <=> %partial) > | TokenLexer String String -- %lexer > | TokenImportedIdentity -- %importedidentity > | TokenMonad String String String String -- %monad > | TokenNonassoc [String] -- %nonassoc > | TokenRight [String] -- %right > | TokenLeft [String] -- %left > | TokenExpect Int -- %expect > | TokenError String (Maybe String) -- %error > | TokenErrorExpected -- %error.expected > | TokenErrorHandlerType String -- %errorhandlertype > | TokenAttributetype String -- %attributetype > | TokenAttribute String String -- %attribute > deriving (Eq, Show) > getTokenType :: [Directive t] -> String > getTokenType ds > = case [ t | (TokenType t) <- ds ] of > [t] -> t > [] -> error "no token type given" > _ -> error "multiple token types" > getParserNames :: [Directive t] -> [Directive t] > getParserNames ds = [ t | t@(TokenName _ _ _) <- ds ] > getLexer :: [Directive t] -> Maybe (String, String) > getLexer ds > = case [ (a,b) | (TokenLexer a b) <- ds ] of > [t] -> Just t > [] -> Nothing > _ -> error "multiple lexer directives" > getImportedIdentity :: [Directive t] -> Bool > getImportedIdentity ds > = case [ (()) | TokenImportedIdentity <- ds ] of > [_] -> True > [] -> False > _ -> error "multiple importedidentity directives" > getMonad :: [Directive t] -> (Bool, String, String, String, String) > getMonad ds > = case [ (True,a,b,c,d) | (TokenMonad a b c d) <- ds ] of > [t] -> t > [] -> (False,"()","HappyIdentity","Happy_Prelude.>>=","Happy_Prelude.return") > _ -> error "multiple monad directives" > getTokenSpec :: [Directive t] -> [(t, TokenSpec)] > getTokenSpec ds = concat [ t | (TokenSpec t) <- ds ] > getPrios :: [Directive t] -> [Directive t] > getPrios ds = [ d | d <- ds, > case d of > TokenNonassoc _ -> True > TokenLeft _ -> True > TokenRight _ -> True > _ -> False > ] > getPrioNames :: Directive t -> [String] > 
getPrioNames (TokenNonassoc s) = s
> getPrioNames (TokenLeft s) = s
> getPrioNames (TokenRight s) = s
> getPrioNames _ = error "Not an associativity token"

> getExpect :: [Directive t] -> Maybe Int
> getExpect ds
>     = case [ n | (TokenExpect n) <- ds ] of
>         [t] -> Just t
>         []  -> Nothing
>         _   -> error "multiple expect directives"

> getError :: [Directive t] -> ErrorHandlerInfo
> getError ds
>     = case [ (a, mb_b) | (TokenError a mb_b) <- ds ] of
>         [] -> DefaultErrorHandler
>         [(a,Nothing)] -> CustomErrorHandler a
>         [(abort,Just addMessage)] -> ResumptiveErrorHandler abort addMessage
>         _ -> error "multiple error directives"

> getErrorExpectedMode :: Eq t => [Directive t] -> ErrorExpectedMode
> getErrorExpectedMode ds
>   | ["explist"] <- old_directive
>   = OldExpected
>   | TokenErrorExpected `elem` ds
>   = NewExpected
>   | length old_directive > 1
>   = error "multiple errorhandlertype directives"
>   | otherwise
>   = NoExpected
>   where
>     old_directive = [ a | (TokenErrorHandlerType a) <- ds ]

> getAttributes :: [Directive t] -> [(String, String)]
> getAttributes ds
>     = [ (ident,typ) | (TokenAttribute ident typ) <- ds ]

> getAttributetype :: [Directive t] -> Maybe String
> getAttributetype ds
>     = case [ t | (TokenAttributetype t) <- ds ] of
>         [t] -> Just t
>         []  -> Nothing
>         _   -> error "multiple attributetype directives"

> getAttributeGrammarExtras :: [Directive t] -> Maybe AttributeGrammarExtras
> getAttributeGrammarExtras ds = case (getAttributes ds, getAttributetype ds) of
>   ([], Nothing) -> Nothing
>   (as, Just at) -> Just $ AttributeGrammarExtras {
>       attributes = as,
>       attributetype = at
>     }
>   (_ : _, Nothing) -> error "attributes found without attribute type directive"

> -- | Parse a token spec.
> --
> -- The first occurrence of '$$' indicates an expression in which the '$$'
> -- will be substituted for the actual lexed token. '$$' in string or char
> -- literals ('".."' and '\'.\'') however does not count.
> parseTokenSpec :: String -> TokenSpec > parseTokenSpec code0 = go code0 "" > where go code acc = > case code of > [] -> TokenFixed code0 > '"' :r -> case reads code :: [(String,String)] of > [] -> go r ('"':acc) > (s,r'):_ -> go r' (reverse (show s) ++ acc) > a:'\'' :r | isAlphaNum a -> go r ('\'':a:acc) > '\'' :r -> case reads code :: [(Char,String)] of > [] -> go r ('\'':acc) > (c,r'):_ -> go r' (reverse (show c) ++ acc) > '\\':'$':r -> go r ('$':acc) > '$':'$':r -> TokenWithValue $ ExpressionWithHole (reverse acc) r > c:r -> go r (c:acc) happy-lib-2.1.7/frontend/src/Happy/Frontend/AttrGrammar.lhs0000644000000000000000000001041107346545000021773 0ustar0000000000000000> module Happy.Frontend.AttrGrammar > ( AgToken (..) > , AgRule (..) > , AgSelfAssign(..) > , AgSubAssign(..) > , AgConditional(..) > , HasLexer (..) > , Index > , agLexAll > , subRefVal > , selfRefVal > , rightRefVal > ) where > import Data.Char > import Happy.Frontend.ParseMonad.Class > type Index = Int > data AgToken > = AgTok_LBrace > | AgTok_RBrace > | AgTok_Where > | AgTok_Semicolon > | AgTok_Eq > | AgTok_SelfRef String > | AgTok_SubRef (Index, String) > | AgTok_RightmostRef String > | AgTok_Unknown String > | AgTok_EOF > deriving (Show,Eq,Ord) > subRefVal :: AgToken -> (Index, String) > subRefVal (AgTok_SubRef x) = x > subRefVal _ = error "subRefVal: Bad value" > selfRefVal :: AgToken -> String > selfRefVal (AgTok_SelfRef x) = x > selfRefVal _ = error "selfRefVal: Bad value" > rightRefVal :: AgToken -> String > rightRefVal (AgTok_RightmostRef x) = x > rightRefVal _ = error "rightRefVal: Bad value" > data AgRule > = SelfAssign AgSelfAssign > | SubAssign AgSubAssign > | RightmostAssign String [AgToken] > -- ^ Syntactic sugar > | Conditional AgConditional > deriving (Show,Eq,Ord) We will partition the rule types and handle them separately, so we want a separate data type for each core rule type. We don't need one for `RightmostAssign` because it is syntactic sugar. 
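The desugaring is straightforward: a rightmost assignment `$>.attr = ...` behaves exactly like a positional assignment to the last symbol of the production. The following standalone sketch mirrors that rewrite with simplified illustration types (`Rule` and `desugar` are hypothetical names, not the `AgRule` pipeline in this module):

```haskell
-- Simplified sketch of why RightmostAssign is syntactic sugar:
-- given the arity of a production, "$>" is just a reference to the
-- last position.  Rule/desugar are illustration names only.
data Rule
  = Self String [String]        -- $$.attr  = toks
  | Sub (Int, String) [String]  -- $n.attr  = toks
  | Rightmost String [String]   -- $>.attr  = toks
  deriving (Eq, Show)

-- Rewrite a rightmost assignment into a positional sub-assignment
-- targeting the last right-hand-side symbol; leave other rules alone.
desugar :: Int -> Rule -> Rule
desugar arity (Rightmost attr toks) = Sub (arity, attr) toks
desugar _     r                     = r

main :: IO ()
main = do
  print (desugar 3 (Rightmost "pos" ["tok"]))
  print (desugar 3 (Self "pos" []))
```

This is the same rewrite the Mangler performs when it partitions the parsed rules, which is why no separate core rule type is needed.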
> data AgSelfAssign = MkAgSelfAssign String [AgToken] > deriving (Show,Eq,Ord) > data AgSubAssign = MkAgSubAssign (Index, String) [AgToken] > deriving (Show,Eq,Ord) > data AgConditional = MkAgConditional [AgToken] > deriving (Show,Eq,Ord) ----------------------------------------------------------------- -- For the most part, the body of the attribute grammar rules -- is uninterpreted Haskell expressions. We only need to know about -- a) braces and semicolons to break the rules apart -- b) the equals sign to break the rules into LValues and the RHS -- c) attribute references, which are $$, $x (positive integer x) -- or $> (for the rightmost symbol) followed by an optional -- attribute specifier, which is a dot followed by a -- Haskell variable identifier -- Examples: -- $$ -- $1 -- $> -- $$.pos -- $3.value -- $2.someAttribute0' -- -- Everything else can be treated as uninterpreted strings. Our munging -- will wreck column alignment so attribute grammar specifications must -- not rely on layout. 
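To make the reference grammar above concrete, here is a minimal standalone sketch of the three reference forms and their optional attribute specifier. The names `Ref`, `parseRef` and `selector` are hypothetical illustration names; the real lexer below is continuation-based and produces `AgToken` values instead:

```haskell
-- Sketch of the attribute-reference forms: $$, $n and $>, each with an
-- optional ".ident" attribute specifier (empty string when absent).
import Data.Char (isAlphaNum, isDigit, isLower)

data Ref = SelfRef String | SubRef Int String | RightmostRef String
  deriving (Eq, Show)

parseRef :: String -> Maybe Ref
parseRef ('$':'$':rest) = Just (SelfRef (selector rest))
parseRef ('$':'>':rest) = Just (RightmostRef (selector rest))
parseRef ('$':rest@(c:_))
  | isDigit c = let (n, rest') = span isDigit rest
                in Just (SubRef (read n) (selector rest'))
parseRef _ = Nothing

-- A dot followed by a Haskell variable identifier, e.g. ".pos" -> "pos".
selector :: String -> String
selector ('.':x:xs)
  | isLower x = x : takeWhile (\c -> isAlphaNum c || c == '\'') xs
selector _ = ""

main :: IO ()
main = mapM_ (print . parseRef) ["$$", "$1", "$>", "$$.pos", "$3.value"]
```

Anything `parseRef` rejects corresponds to the "uninterpreted string" case handled by `agLexUnknown` below.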
> agLexAll :: String -> Int -> ParseResult [AgToken] > agLexAll = aux [] > where aux toks [] _ = Right (reverse toks) > aux toks s l = agLexer (\t -> aux (t:toks)) s l > instance HasLexer AgToken where > lexToken = agLexer > agLexer :: (AgToken -> Pfunc a) -> Pfunc a > agLexer cont [] = cont AgTok_EOF [] > agLexer cont ('{':rest) = cont AgTok_LBrace rest > agLexer cont ('}':rest) = cont AgTok_RBrace rest > agLexer cont (';':rest) = cont AgTok_Semicolon rest > agLexer cont ('=':rest) = cont AgTok_Eq rest > agLexer cont ('w':'h':'e':'r':'e':rest) = cont AgTok_Where rest > agLexer cont ('$':'$':rest) = agLexAttribute cont (\a -> AgTok_SelfRef a) rest > agLexer cont ('$':'>':rest) = agLexAttribute cont (\a -> AgTok_RightmostRef a) rest > agLexer cont s@('$':rest) = > let (n,rest') = span isDigit rest > in if null n > then agLexUnknown cont s > else agLexAttribute cont (\a -> AgTok_SubRef (read n,a)) rest' > agLexer cont s@(c:rest) > | isSpace c = agLexer cont (dropWhile isSpace rest) > | otherwise = agLexUnknown cont s > agLexUnknown :: (AgToken -> Pfunc a) -> Pfunc a > agLexUnknown cont s = let (u,rest) = aux [] s in cont (AgTok_Unknown u) rest > where aux t [] = (reverse t,[]) > aux t ('$':c:cs) > | c /= '$' && not (isDigit c) = aux ('$':t) (c:cs) > | otherwise = (reverse t,'$':c:cs) > aux t (c:cs) > | isSpace c || c `elem` "{};=" = (reverse t,c:cs) > | otherwise = aux (c:t) cs > agLexAttribute :: (AgToken -> Pfunc a) -> (String -> AgToken) -> Pfunc a > agLexAttribute cont k ('.':x:xs) > | isLower x = let (ident,rest) = span (\c -> isAlphaNum c || c == '\'') xs in cont (k (x:ident)) rest > | otherwise = \_ -> Left "bad attribute identifier" > agLexAttribute cont k rest = cont (k "") rest happy-lib-2.1.7/frontend/src/Happy/Frontend/AttrGrammar/0000755000000000000000000000000007346545000021266 5ustar0000000000000000happy-lib-2.1.7/frontend/src/Happy/Frontend/AttrGrammar/Mangler.lhs0000644000000000000000000001531607346545000023371 
0ustar0000000000000000/----------------------------------------------------------------------------- Special processing for attribute grammars for the Mangler. We re-parse the body of the code block and output the nasty-looking record manipulation and let binding goop (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > {-# LANGUAGE ScopedTypeVariables #-} > module Happy.Frontend.AttrGrammar.Mangler (rewriteAttributeGrammar) where > import Happy.Grammar > import Happy.Frontend.ParseMonad.Class > import Happy.Frontend.AttrGrammar > import Happy.Frontend.AttrGrammar.Parser > import Happy.Frontend.Mangler.Monad > import Data.List ( findIndices, groupBy, intersperse, nub ) > import Data.List ( sortBy ) > import Data.Maybe ( fromMaybe ) > import Control.Monad > rewriteAttributeGrammar :: [Name] -> [Name] -> String -> AttributeGrammarExtras -> M (String,[Index]) > rewriteAttributeGrammar lhs nonterm_names code ag = first we need to parse the body of the code block > case runFromStartP agParser code 0 of > Left msg -> do addErr ("error in attribute grammar rules: "++msg) > return ("",[]) > Right rules -> now we break the rules into three lists, one for synthesized attributes, one for inherited attributes, and one for conditionals > let ( selfRules :: [AgSelfAssign] > , subRules :: [AgSubAssign] > , conditions :: [AgConditional] > ) = partitionRules [] [] [] rules > attrNames = map fst $ attributes ag > defaultAttr = head attrNames now check that $i references are in range > in do let prods :: [Index] > prods = mentionedProductions rules > mapM_ checkArity prods and output the rules > rulesStr <- formatRules arity attrNames defaultAttr > allSubProductions selfRules > subRules conditions return the munged code body and all sub-productions mentioned > return (rulesStr,nub (allSubProductions++prods)) > where arity :: Index > arity = length lhs > partitionRules a b c [] = (a,b,c) > partitionRules a b c 
(RightmostAssign attr toks : xs) = partitionRules a (x:b) c xs > where x = MkAgSubAssign (arity,attr) toks > partitionRules a b c (SelfAssign x : xs) = partitionRules (x:a) b c xs > partitionRules a b c (SubAssign x : xs) = partitionRules a (x:b) c xs > partitionRules a b c (Conditional x : xs) = partitionRules a b (x:c) xs > allSubProductions = map (+1) (findIndices (`elem` nonterm_names) lhs) > mentionedProductions rules = [ i | (AgTok_SubRef (i,_)) <- concat (map getTokens rules) ] > getTokens (SelfAssign (MkAgSelfAssign _ toks)) = toks > getTokens (SubAssign (MkAgSubAssign _ toks)) = toks > getTokens (Conditional (MkAgConditional toks)) = toks > getTokens (RightmostAssign _ toks) = toks > > checkArity x = when (x > arity) $ addErr (show x++" out of range") ------------------------------------------------------------------------------------ -- Actually emit the code for the record bindings and conditionals -- > formatRules :: Index -> [String] -> String -> [Index] > -> [AgSelfAssign] -> [AgSubAssign] -> [AgConditional] > -> M String > formatRules arity _attrNames defaultAttr prods selfRules subRules conditions = return $ > concat [ "\\happyInhAttrs -> let { " > , "happySelfAttrs = happyInhAttrs",formattedSelfRules > , subProductionRules > , "; happyConditions = ", formattedConditions > , " } in (happyConditions,happySelfAttrs)" > ] > > where formattedSelfRules = case selfRules of [] -> []; _ -> "{ "++formattedSelfRules'++" }" > formattedSelfRules' = concat $ intersperse ", " $ map formatSelfRule selfRules > formatSelfRule (MkAgSelfAssign [] toks) = defaultAttr++" = "++(formatTokens toks) > formatSelfRule (MkAgSelfAssign attr toks) = attr++" = "++(formatTokens toks) > subRulesMap :: [(Int,[(String,[AgToken])])] > subRulesMap = map (\l -> foldr (\ (_,x) (i,xs) -> (i,x:xs)) > (fst $ head l,[snd $ head l]) > (tail l) ) . > groupBy (\x y -> (fst x) == (fst y)) . > sortBy (\x y -> compare (fst x) (fst y)) . 
> map (\(MkAgSubAssign (i,ident) toks) -> (i,(ident,toks))) $ subRules > subProductionRules = concat $ map formatSubRules prods > formatSubRules i = > let attrs = fromMaybe [] . lookup i $ subRulesMap > attrUpdates' = concat $ intersperse ", " $ map (formatSubRule i) attrs > attrUpdates = case attrUpdates' of [] -> []; x -> "{ "++x++" }" > in concat ["; (happyConditions_",show i,",happySubAttrs_",show i,") = ",mkHappyVar i > ," happyEmptyAttrs" > , attrUpdates > ] > > formattedConditions = concat $ intersperse " Prelude.++ " $ localConditions : (map (\i -> "happyConditions_"++(show i)) prods) > localConditions = "["++(concat $ intersperse ", " $ map formatCondition conditions)++"]" > formatCondition (MkAgConditional toks) = formatTokens toks > formatSubRule _ ([],toks) = defaultAttr++" = "++(formatTokens toks) > formatSubRule _ (attr,toks) = attr++" = "++(formatTokens toks) > formatTokens tokens = concat (map formatToken tokens) > formatToken AgTok_LBrace = "{ " > formatToken AgTok_RBrace = "} " > formatToken AgTok_Where = "where " > formatToken AgTok_Semicolon = "; " > formatToken AgTok_Eq = "=" > formatToken (AgTok_SelfRef []) = "("++defaultAttr++" happySelfAttrs) " > formatToken (AgTok_SelfRef x) = "("++x++" happySelfAttrs) " > formatToken (AgTok_RightmostRef x) = formatToken (AgTok_SubRef (arity,x)) > formatToken (AgTok_SubRef (i,[])) > | i `elem` prods = "("++defaultAttr++" happySubAttrs_"++(show i)++") " > | otherwise = mkHappyVar i ++ " " > formatToken (AgTok_SubRef (i,x)) > | i `elem` prods = "("++x++" happySubAttrs_"++(show i)++") " > | otherwise = error ("lhs "++(show i)++" is not a non-terminal") > formatToken (AgTok_Unknown x) = x++" " > formatToken AgTok_EOF = error "formatToken AgTok_EOF" > mkHappyVar :: Int -> String > mkHappyVar n = "happy_var_" ++ show n happy-lib-2.1.7/frontend/src/Happy/Frontend/AttrGrammar/Parser.hs0000644000000000000000000013230707346545000023064 0ustar0000000000000000{-# OPTIONS_GHC -w #-} {-# LANGUAGE CPP #-} {-# LANGUAGE 
MagicHash #-} {-# LANGUAGE BangPatterns #-} {-# LANGUAGE TypeSynonymInstances #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE PatternGuards #-} {-# LANGUAGE NoStrictData #-} {-# LANGUAGE UnboxedTuples #-} {-# LANGUAGE PartialTypeSignatures #-} {-# OPTIONS_GHC -w #-} module Happy.Frontend.AttrGrammar.Parser (agParser) where import Happy.Frontend.ParseMonad.Class import Happy.Frontend.ParseMonad import Happy.Frontend.AttrGrammar import qualified Control.Monad as Happy_Prelude import qualified Data.Bool as Happy_Prelude import qualified Data.Function as Happy_Prelude import qualified Data.Int as Happy_Prelude import qualified Data.List as Happy_Prelude import qualified Data.Maybe as Happy_Prelude import qualified Data.String as Happy_Prelude import qualified Data.Tuple as Happy_Prelude import qualified GHC.Err as Happy_Prelude import qualified GHC.Num as Happy_Prelude import qualified Text.Show as Happy_Prelude import qualified Data.Array as Happy_Data_Array import qualified Data.Bits as Bits import qualified GHC.Exts as Happy_GHC_Exts import Control.Applicative(Applicative(..)) import Control.Monad (ap) -- parser produced by Happy Version 2.1.6 newtype HappyAbsSyn = HappyAbsSyn HappyAny #if __GLASGOW_HASKELL__ >= 607 type HappyAny = Happy_GHC_Exts.Any #else type HappyAny = forall a . 
a #endif newtype HappyWrap5 = HappyWrap5 ([AgRule]) happyIn5 :: ([AgRule]) -> (HappyAbsSyn ) happyIn5 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap5 x) {-# INLINE happyIn5 #-} happyOut5 :: (HappyAbsSyn ) -> HappyWrap5 happyOut5 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut5 #-} newtype HappyWrap6 = HappyWrap6 ([AgRule]) happyIn6 :: ([AgRule]) -> (HappyAbsSyn ) happyIn6 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap6 x) {-# INLINE happyIn6 #-} happyOut6 :: (HappyAbsSyn ) -> HappyWrap6 happyOut6 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut6 #-} newtype HappyWrap7 = HappyWrap7 (AgRule) happyIn7 :: (AgRule) -> (HappyAbsSyn ) happyIn7 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap7 x) {-# INLINE happyIn7 #-} happyOut7 :: (HappyAbsSyn ) -> HappyWrap7 happyOut7 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut7 #-} newtype HappyWrap8 = HappyWrap8 ([AgToken]) happyIn8 :: ([AgToken]) -> (HappyAbsSyn ) happyIn8 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap8 x) {-# INLINE happyIn8 #-} happyOut8 :: (HappyAbsSyn ) -> HappyWrap8 happyOut8 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut8 #-} newtype HappyWrap9 = HappyWrap9 ([AgToken]) happyIn9 :: ([AgToken]) -> (HappyAbsSyn ) happyIn9 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap9 x) {-# INLINE happyIn9 #-} happyOut9 :: (HappyAbsSyn ) -> HappyWrap9 happyOut9 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut9 #-} happyInTok :: (AgToken) -> (HappyAbsSyn ) happyInTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyInTok #-} happyOutTok :: (HappyAbsSyn ) -> (AgToken) happyOutTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOutTok #-} {-# NOINLINE happyTokenStrings #-} happyTokenStrings = ["\"{\"","\"}\"","\";\"","\"=\"","where","selfRef","subRef","rightRef","unknown","%eof"] happyActOffsets :: HappyAddr happyActOffsets = HappyA# 
"\x0d\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\xfe\xff\xff\xff\x08\x00\x00\x00\x09\x00\x00\x00\x18\x00\x00\x00\x1a\x00\x00\x00\xfa\xff\xff\xff\x08\x00\x00\x00\x08\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x08\x00\x00\x00\x08\x00\x00\x00\x08\x00\x00\x00\x08\x00\x00\x00\x08\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1b\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x08\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1d\x00\x00\x00\x08\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00"# happyGotoOffsets :: HappyAddr happyGotoOffsets = HappyA# "\x17\x00\x00\x00\x0a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x1f\x00\x00\x00\x20\x00\x00\x00\x21\x00\x00\x00\x00\x00\x00\x00\x22\x00\x00\x00\x24\x00\x00\x00\x25\x00\x00\x00\x26\x00\x00\x00\x27\x00\x00\x00\x28\x00\x00\x00\x19\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x29\x00\x00\x00\x2a\x00\x00\x00\x2b\x00\x00\x00\x2c\x00\x00\x00\x2d\x00\x00\x00\x2f\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x32\x00\x00\x00\x00\x00\x00\x00\x33\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyDefActions :: HappyAddr happyDefActions = HappyA# 
"\xfb\xff\xff\xff\x00\x00\x00\x00\xfe\xff\xff\xff\xfc\xff\xff\xff\xf0\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xf0\xff\xff\xff\xf0\xff\xff\xff\xf0\xff\xff\xff\xf7\xff\xff\xff\xe8\xff\xff\xff\xf0\xff\xff\xff\xf0\xff\xff\xff\xf0\xff\xff\xff\xf0\xff\xff\xff\xf0\xff\xff\xff\xfb\xff\xff\xff\xfd\xff\xff\xff\xf1\xff\xff\xff\xf2\xff\xff\xff\xf3\xff\xff\xff\xf4\xff\xff\xff\xf5\xff\xff\xff\x00\x00\x00\x00\xe8\xff\xff\xff\xe8\xff\xff\xff\xe8\xff\xff\xff\xe8\xff\xff\xff\xe8\xff\xff\xff\xf0\xff\xff\xff\xe8\xff\xff\xff\xfa\xff\xff\xff\xf9\xff\xff\xff\xf8\xff\xff\xff\xe9\xff\xff\xff\xea\xff\xff\xff\xeb\xff\xff\xff\xec\xff\xff\xff\xee\xff\xff\xff\xed\xff\xff\xff\x00\x00\x00\x00\xf0\xff\xff\xff\xf6\xff\xff\xff\xe8\xff\xff\xff\xef\xff\xff\xff"# happyCheck :: HappyAddr happyCheck = HappyA# "\xff\xff\xff\xff\x02\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x0b\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x05\x00\x00\x00\x05\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x02\x00\x00\x00\xff\xff\xff\xff\x05\x00\x00\x00\x03\x00\x00\x00\x05\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\xff\xff\xff\xff\x04\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\x03\x00\x00\x00\xff\xff\xff\xff\x04\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\xff\xff\xff\xff\x04\x00\x00\x00\x03\x00\x00\x00\xff\xff\xff\xff\x04\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"# happyTable :: HappyAddr happyTable = HappyA# 
"\x00\x00\x00\x00\x1c\x00\x00\x00\x14\x00\x00\x00\x1d\x00\x00\x00\x1e\x00\x00\x00\xff\xff\xff\xff\x1f\x00\x00\x00\x20\x00\x00\x00\x21\x00\x00\x00\x22\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x0f\x00\x00\x00\x0c\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x08\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x14\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x0b\x00\x00\x00\x2d\x00\x00\x00\x0a\x00\x00\x00\x2f\x00\x00\x00\x0c\x00\x00\x00\x24\x00\x00\x00\x23\x00\x00\x00\x22\x00\x00\x00\x00\x00\x00\x00\x1a\x00\x00\x00\x19\x00\x00\x00\x18\x00\x00\x00\x17\x00\x00\x00\x16\x00\x00\x00\x15\x00\x00\x00\x00\x00\x00\x00\x2b\x00\x00\x00\x2a\x00\x00\x00\x29\x00\x00\x00\x28\x00\x00\x00\x27\x00\x00\x00\x26\x00\x00\x00\x00\x00\x00\x00\x25\x00\x00\x00\x2d\x00\x00\x00\x00\x00\x00\x00\x2f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyReduceArr = Happy_Data_Array.array (1, 23) [ (1 , happyReduce_1), (2 , happyReduce_2), (3 , happyReduce_3), (4 , happyReduce_4), (5 , happyReduce_5), (6 , happyReduce_6), (7 , happyReduce_7), (8 , happyReduce_8), (9 , happyReduce_9), (10 , happyReduce_10), (11 , happyReduce_11), (12 , happyReduce_12), (13 , happyReduce_13), (14 , happyReduce_14), (15 , happyReduce_15), (16 , happyReduce_16), (17 , happyReduce_17), (18 , happyReduce_18), (19 , happyReduce_19), (20 , happyReduce_20), (21 , happyReduce_21), (22 , happyReduce_22), (23 , happyReduce_23) ] happyRuleArr :: HappyAddr happyRuleArr = HappyA# 
"\x00\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x00\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x02\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00"# happyCatchStates :: [Happy_Prelude.Int] happyCatchStates = [] happy_n_terms = 12 :: Happy_Prelude.Int happy_n_nonterms = 5 :: Happy_Prelude.Int happy_n_starts = 1 :: Happy_Prelude.Int happyReduce_1 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_1 = happySpecReduce_1 0# happyReduction_1 happyReduction_1 happy_x_1 = case happyOut6 happy_x_1 of { (HappyWrap6 happy_var_1) -> happyIn5 (happy_var_1 )} happyReduce_2 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_2 = happySpecReduce_3 1# happyReduction_2 happyReduction_2 happy_x_3 happy_x_2 happy_x_1 = case happyOut7 happy_x_1 of { (HappyWrap7 happy_var_1) -> case happyOut6 happy_x_3 of { (HappyWrap6 happy_var_3) -> happyIn6 (happy_var_1 : happy_var_3 )}} happyReduce_3 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_3 = happySpecReduce_1 1# happyReduction_3 happyReduction_3 happy_x_1 = case happyOut7 happy_x_1 of { (HappyWrap7 happy_var_1) -> happyIn6 (happy_var_1 : [] )} happyReduce_4 :: () => Happy_GHC_Exts.Int# -> AgToken -> 
Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_4 = happySpecReduce_0 1# happyReduction_4 happyReduction_4 = happyIn6 ([] ) happyReduce_5 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_5 = happySpecReduce_3 2# happyReduction_5 happyReduction_5 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_3 of { (HappyWrap8 happy_var_3) -> happyIn7 (SelfAssign $ MkAgSelfAssign (selfRefVal happy_var_1) happy_var_3 )}} happyReduce_6 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_6 = happySpecReduce_3 2# happyReduction_6 happyReduction_6 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_3 of { (HappyWrap8 happy_var_3) -> happyIn7 (SubAssign $ MkAgSubAssign (subRefVal happy_var_1) happy_var_3 )}} happyReduce_7 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_7 = happySpecReduce_3 2# happyReduction_7 happyReduction_7 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_3 of { (HappyWrap8 happy_var_3) -> happyIn7 (RightmostAssign (rightRefVal happy_var_1) happy_var_3 )}} happyReduce_8 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_8 = happySpecReduce_2 2# happyReduction_8 happyReduction_8 happy_x_2 happy_x_1 = case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn7 (Conditional $ MkAgConditional happy_var_2 )} happyReduce_9 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_9 = happyReduce 4# 3# happyReduction_9 happyReduction_9 
(happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> case happyOutTok happy_x_3 of { happy_var_3 -> case happyOut8 happy_x_4 of { (HappyWrap8 happy_var_4) -> happyIn8 ([happy_var_1] ++ happy_var_2 ++ [happy_var_3] ++ happy_var_4 ) `HappyStk` happyRest}}}} happyReduce_10 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_10 = happySpecReduce_2 3# happyReduction_10 happyReduction_10 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_11 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_11 = happySpecReduce_2 3# happyReduction_11 happyReduction_11 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_12 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_12 = happySpecReduce_2 3# happyReduction_12 happyReduction_12 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_13 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_13 = happySpecReduce_2 3# happyReduction_13 happyReduction_13 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_14 :: () => Happy_GHC_Exts.Int# -> AgToken -> 
Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_14 = happySpecReduce_2 3# happyReduction_14 happyReduction_14 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn8 (happy_var_1 : happy_var_2 )}} happyReduce_15 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_15 = happySpecReduce_0 3# happyReduction_15 happyReduction_15 = happyIn8 ([] ) happyReduce_16 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_16 = happyReduce 4# 4# happyReduction_16 happyReduction_16 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> case happyOutTok happy_x_3 of { happy_var_3 -> case happyOut9 happy_x_4 of { (HappyWrap9 happy_var_4) -> happyIn9 ([happy_var_1] ++ happy_var_2 ++ [happy_var_3] ++ happy_var_4 ) `HappyStk` happyRest}}}} happyReduce_17 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_17 = happySpecReduce_2 4# happyReduction_17 happyReduction_17 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> happyIn9 (happy_var_1 : happy_var_2 )}} happyReduce_18 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_18 = happySpecReduce_2 4# happyReduction_18 happyReduction_18 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> happyIn9 (happy_var_1 : happy_var_2 )}} happyReduce_19 :: () => Happy_GHC_Exts.Int# -> AgToken 
-> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_19 = happySpecReduce_2 4# happyReduction_19 happyReduction_19 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> happyIn9 (happy_var_1 : happy_var_2 )}} happyReduce_20 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_20 = happySpecReduce_2 4# happyReduction_20 happyReduction_20 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> happyIn9 (happy_var_1 : happy_var_2 )}} happyReduce_21 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_21 = happySpecReduce_2 4# happyReduction_21 happyReduction_21 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn9 (happy_var_1 : happy_var_2 )}} happyReduce_22 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_22 = happySpecReduce_2 4# happyReduction_22 happyReduction_22 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { happy_var_1 -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> happyIn9 (happy_var_1 : happy_var_2 )}} happyReduce_23 :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_23 = happySpecReduce_0 4# happyReduction_23 happyReduction_23 = happyIn9 ([] ) happyTerminalToTok term = case term of { AgTok_EOF -> 11#; AgTok_LBrace -> 2#; AgTok_RBrace -> 3#; AgTok_Semicolon -> 4#; AgTok_Eq -> 5#; AgTok_Where -> 6#; AgTok_SelfRef _ -> 7#; AgTok_SubRef _ -> 8#; AgTok_RightmostRef _ -> 9#; AgTok_Unknown _ -> 10#; _ -> -1#; } {-# NOINLINE 
happyTerminalToTok #-} happyLex kend kmore = lexTokenP (\tk -> case tk of { AgTok_EOF -> kend tk; _ -> kmore (happyTerminalToTok tk) tk }) {-# INLINE happyLex #-} happyNewToken action sts stk = happyLex (\tk -> happyDoAction 11# tk action sts stk) (\i tk -> happyDoAction i tk action sts stk) happyReport 11# = happyReport' happyReport _ = happyReport' happyThen :: () => (P a) -> (a -> (P b)) -> (P b) happyThen = (Happy_Prelude.>>=) happyReturn :: () => a -> (P a) happyReturn = (Happy_Prelude.return) happyParse :: () => Happy_GHC_Exts.Int# -> P (HappyAbsSyn ) happyNewToken :: () => Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> (P (HappyAbsSyn )) happyDoAction :: () => Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> (P (HappyAbsSyn )) happyReduceArr :: () => Happy_Data_Array.Array Happy_Prelude.Int (Happy_GHC_Exts.Int# -> AgToken -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> (P (HappyAbsSyn ))) happyThen1 :: () => P a -> (a -> P b) -> P b happyThen1 = happyThen happyFmap1 f m = happyThen m (\a -> happyReturn (f a)) happyReturn1 :: () => a -> (P a) happyReturn1 = happyReturn happyReport' :: () => (AgToken) -> [Happy_Prelude.String] -> (P a) -> (P a) happyReport' = (\tokens expected resume -> happyError) happyAbort :: () => (P a) happyAbort = Happy_Prelude.error "Called abort handler in non-resumptive parser" agParser = happySomeParser where happySomeParser = happyThen (happyParse 0#) (\x -> happyReturn (let {(HappyWrap5 x') = happyOut5 x} in x')) happySeq = happyDontSeq happyError :: P a happyError = failP (\l -> show l ++ ": Parse error\n") #define HAPPY_COERCE 1 -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ #if !defined(__GLASGOW_HASKELL__) # error This code isn't being built with GHC. #endif -- Get WORDS_BIGENDIAN (if defined) #include "MachDeps.h" -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. 
#define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Happy_Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Happy_Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Happy_Prelude.Bool) #define PLUS(n,m) (n Happy_GHC_Exts.+# m) #define MINUS(n,m) (n Happy_GHC_Exts.-# m) #define TIMES(n,m) (n Happy_GHC_Exts.*# m) #define NEGATE(n) (Happy_GHC_Exts.negateInt# (n)) type Happy_Int = Happy_GHC_Exts.Int# data Happy_IntList = HappyCons Happy_Int Happy_IntList #define INVALID_TOK -1# #define ERROR_TOK 0# #define CATCH_TOK 1# #if defined(HAPPY_COERCE) # define GET_ERROR_TOKEN(x) (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# i) -> i }) # define MK_ERROR_TOKEN(i) (Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# i)) # define MK_TOKEN(x) (happyInTok (x)) #else # define GET_ERROR_TOKEN(x) (case x of { HappyErrorToken (Happy_GHC_Exts.I# i) -> i }) # define MK_ERROR_TOKEN(i) (HappyErrorToken (Happy_GHC_Exts.I# i)) # define MK_TOKEN(x) (HappyTerminal (x)) #endif #if defined(HAPPY_DEBUG) # define DEBUG_TRACE(s) (happyTrace (s)) Happy_Prelude.$ happyTrace string expr = Happy_System_IO_Unsafe.unsafePerformIO Happy_Prelude.$ do Happy_System_IO.hPutStr Happy_System_IO.stderr string Happy_Prelude.return expr #else # define DEBUG_TRACE(s) {- nothing -} #endif infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. 
happyAccept ERROR_TOK tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = DEBUG_TRACE("state: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ",\ttoken: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ ",\taction: ") case happyDecodeAction (happyNextAction i st) of HappyFail -> DEBUG_TRACE("failing.\n") happyFail i tk st HappyAccept -> DEBUG_TRACE("accept.\n") happyAccept i tk st HappyReduce rule -> DEBUG_TRACE("reduce (rule " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# rule) Happy_Prelude.++ ")") (happyReduceArr Happy_Data_Array.! (Happy_GHC_Exts.I# rule)) i tk st HappyShift new_state -> DEBUG_TRACE("shift, enter state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) Happy_Prelude.++ "\n") happyShift new_state i tk st {-# INLINE happyNextAction #-} happyNextAction i st = case happyIndexActionTable i st of Happy_Prelude.Just (Happy_GHC_Exts.I# act) -> act Happy_Prelude.Nothing -> happyIndexOffAddr happyDefActions st {-# INLINE happyIndexActionTable #-} happyIndexActionTable i st | GTE(i, 0#), GTE(off, 0#), EQ(happyIndexOffAddr happyCheck off, i) -- i >= 0: Guard against INVALID_TOK (do the default action, which ultimately errors) -- off >= 0: Otherwise it's a default action -- equality check: Ensure that the entry in the compressed array is owned by st = Happy_Prelude.Just (Happy_GHC_Exts.I# (happyIndexOffAddr happyTable off)) | Happy_Prelude.otherwise = Happy_Prelude.Nothing where off = PLUS(happyIndexOffAddr happyActOffsets st, i) data HappyAction = HappyFail | HappyAccept | HappyReduce Happy_Int -- rule number | HappyShift Happy_Int -- new state deriving Happy_Prelude.Show {-# INLINE happyDecodeAction #-} happyDecodeAction :: Happy_Int -> 
HappyAction happyDecodeAction 0# = HappyFail happyDecodeAction -1# = HappyAccept happyDecodeAction action | LT(action, 0#) = HappyReduce NEGATE(PLUS(action, 1#)) | Happy_Prelude.otherwise = HappyShift MINUS(action, 1#) {-# INLINE happyIndexGotoTable #-} happyIndexGotoTable nt st = happyIndexOffAddr happyTable off where off = PLUS(happyIndexOffAddr happyGotoOffsets st, nt) {-# INLINE happyIndexOffAddr #-} happyIndexOffAddr :: HappyAddr -> Happy_Int -> Happy_Int happyIndexOffAddr (HappyA# arr) off = #if __GLASGOW_HASKELL__ >= 901 Happy_GHC_Exts.int32ToInt# -- qualified import because it doesn't exist on older GHC's #endif #ifdef WORDS_BIGENDIAN -- The CI of `alex` tests this code path (Happy_GHC_Exts.word32ToInt32# (Happy_GHC_Exts.wordToWord32# (Happy_GHC_Exts.byteSwap32# (Happy_GHC_Exts.word32ToWord# (Happy_GHC_Exts.int32ToWord32# #endif (Happy_GHC_Exts.indexInt32OffAddr# arr off) #ifdef WORDS_BIGENDIAN ))))) #endif happyIndexRuleArr :: Happy_Int -> (# Happy_Int, Happy_Int #) happyIndexRuleArr r = (# nt, len #) where !(Happy_GHC_Exts.I# n_starts) = happy_n_starts offs = TIMES(MINUS(r,n_starts),2#) nt = happyIndexOffAddr happyRuleArr offs len = happyIndexOffAddr happyRuleArr PLUS(offs,1#) data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state ERROR_TOK tk st sts stk@(x `HappyStk` _) = -- See "Error Fixup" below let i = GET_ERROR_TOKEN(x) in DEBUG_TRACE("shifting the error token") happyDoAction i tk new_state (HappyCons st sts) stk happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons st sts) (MK_TOKEN(tk) `HappyStk` stk) -- happyReduce is specialised for the common cases. 
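The action encoding decoded above can be restated on boxed `Int`s: 0 means failure, -1 means accept, any other negative value `a` encodes a reduction by rule `-(a + 1)`, and a positive value `a` encodes a shift into state `a - 1`. A boxed sketch of the same decoding (the `Action` type here is illustrative):

```haskell
module Main where

-- Boxed analogue of HappyAction / happyDecodeAction.
data Action
  = Fail
  | Accept
  | Reduce Int -- rule number
  | Shift Int  -- new state
  deriving (Eq, Show)

-- 0 -> fail, -1 -> accept, other negatives -> reduce, positives -> shift.
decodeAction :: Int -> Action
decodeAction 0    = Fail
decodeAction (-1) = Accept
decodeAction a
  | a < 0     = Reduce (negate (a + 1))
  | otherwise = Shift (a - 1)

main :: IO ()
main = mapM_ (print . decodeAction) [0, -1, -2, -5, 1, 7]
```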
happySpecReduce_0 nt fn j tk st sts stk = happySeq fn (happyGoto nt j tk st (HappyCons st sts) (fn `HappyStk` stk)) happySpecReduce_1 nt fn j tk old_st sts@(HappyCons st _) (v1 `HappyStk` stk') = let r = fn v1 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happySpecReduce_2 nt fn j tk old_st (HappyCons _ sts@(HappyCons st _)) (v1 `HappyStk` v2 `HappyStk` stk') = let r = fn v1 v2 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happySpecReduce_3 nt fn j tk old_st (HappyCons _ (HappyCons _ sts@(HappyCons st _))) (v1 `HappyStk` v2 `HappyStk` v3 `HappyStk` stk') = let r = fn v1 v2 v3 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happyReduce k nt fn j tk st sts stk = case happyDrop MINUS(k,(1# :: Happy_Int)) sts of sts1@(HappyCons st1 _) -> let r = fn stk in -- it doesn't hurt to always seq here... st `happyTcHack` happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons st sts) of sts1@(HappyCons st1 _) -> let drop_stk = happyDropStk k stk in j `happyTcHack` happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons st sts) of sts1@(HappyCons st1 _) -> let drop_stk = happyDropStk k stk off = happyIndexOffAddr happyGotoOffsets st1 off_i = PLUS(off, nt) new_state = happyIndexOffAddr happyTable off_i in j `happyTcHack` happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons _ t) = happyDrop MINUS(n,(1# :: Happy_Int)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk MINUS(n,(1#::Happy_Int)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = DEBUG_TRACE(", goto state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) 
Happy_Prelude.++ "\n") happyDoAction j tk new_state where new_state = happyIndexGotoTable nt st {- Note [Error recovery] ~~~~~~~~~~~~~~~~~~~~~~~~ When there is no applicable action for the current lookahead token `tk`, happy enters error recovery mode. Depending on whether the grammar file declares the two-action form `%error { abort } { report }` for Resumptive Error Handling, it works in one phase (non-resumptive) or two (resumptive): 1. Fixup mode: Check whether there is an action for the error token ERROR_TOK. If there is, do *not* emit an error and pretend instead that an `error` token was inserted. When there is no ERROR_TOK action, report an error. In non-resumptive error handling, calling the single error handler (e.g. `happyError`) will throw an exception and abort the parser. However, in resumptive error handling we enter *error resumption mode*. 2. Error resumption mode: After reporting the error (with `report`), happy will attempt to find a good state stack to resume parsing in. For each candidate stack, it discards input until one of the candidates resumes (i.e. shifts the current input). If no candidate resumes before the end of input, resumption has failed and we call the `abort` function, to much the same effect as in non-resumptive error handling. Candidate stacks are declared by the grammar author using the special `catch` terminal and called "catch frames". This mechanism is described in detail in Note [happyResume]. The `catch` resumption mechanism (2) is what is usually associated with `error` in `bison` or `menhir`. Since `error` is used for the Fixup mechanism (1) above, we call the corresponding token `catch`. Furthermore, in contrast to `bison`, our implementation of `catch` non-deterministically considers multiple catch frames on the stack for resumption (see Note [Multiple catch frames]). Note [happyResume] ~~~~~~~~~~~~~~~~~~ `happyResume` implements the resumption mechanism from Note [Error recovery]. It is best understood by example. 
Consider Exp :: { String } Exp : '1' { "1" } | catch { "catch" } | Exp '+' Exp %shift { $1 Happy_Prelude.++ " + " Happy_Prelude.++ $3 } -- %shift: associate 1 + 1 + 1 to the right | '(' Exp ')' { "(" Happy_Prelude.++ $2 Happy_Prelude.++ ")" } The idea of the use of `catch` here is that upon encountering a parse error during expression parsing, we can gracefully degrade using the `catch` rule, still producing a partial syntax tree and keep on parsing to find further syntax errors. Let's trace the parser state for input 11+1, which will error out after shifting 1. After shifting, we have the following item stack (growing downwards and omitting transitive closure items): State 0: %start_parseExp -> . Exp State 5: Exp -> '1' . (Stack as a list of state numbers: [5,0].) As Note [Error recovery] describes, we will first try Fixup mode. That fails because no production can shift the `error` token. Next we try Error resumption mode. This works as follows: 1. Pop off the item stack until we find an item that can shift the `catch` token. (Implemented in `pop_items`.) * State 5 cannot shift catch. Pop. * State 0 can shift catch, which would transition into State 4: Exp -> catch . So record the *stack* `[4,0]` after doing the shift transition. We call this a *catch frame*, where the top is a *catch state*, corresponding to an item in which we just shifted a `catch` token. There can be multiple such catch stacks, see Note [Multiple catch frames]. 2. Discard tokens from the input until the lookahead can be shifted in one of the catch stacks. (Implemented in `discard_input_until_exp` and `some_catch_state_shifts`.) * We cannot shift the current lookahead '1' in state 4, so we discard * We *can* shift the next lookahead '+' in state 4, but only after reducing, which pops State 4 and goes to State 3: State 3: %start_parseExp -> Exp . Exp -> Exp . '+' Exp Here we can shift '+'. 
As you can see, to implement this machinery we need to simulate the operation of the LALR automaton, especially reduction (`happySimulateReduce`). Note [Multiple catch frames] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For fewer spurious error messages, it can be beneficial to trace multiple catch items. Consider Exp : '1' | catch | Exp '+' Exp %shift | '(' Exp ')' Let's trace the parser state for input (;+1, which will error out after shifting (. After shifting, we have the following item stack (growing downwards): State 0: %start_parseExp -> . Exp State 6: Exp -> '(' . Exp ')' Upon error, we want to find items in the stack which can shift a catch token. Note that both State 0 and State 6 can shift a catch token, transitioning into State 4: Exp -> catch . Hence we record the catch frames `[4,6,0]` and `[4,0]` for possible resumption. Which catch frame do we pick for resumption? Note that resuming catch frame `[4,0]` will parse as "catch+1", whereas resuming the innermost frame `[4,6,0]` corresponds to parsing "(catch+1". The latter would keep discarding input until the closing ')' is found. So we will discard + and 1, leading to a spurious syntax error at the end of input, aborting the parse and never producing a partial syntax tree. Bad! It is far preferable to resume with catch frame `[4,0]`, where we can resume successfully on input +, so that is what we do. In general, we pick the catch frame for resumption that discards the least amount of input for a successful shift, preferring the topmost such catch frame. -} -- happyFail :: Happy_Int -> Token -> Happy_Int -> _ -- This function triggers Note [Error recovery]. -- If the current token is ERROR_TOK, phase (1) has failed and we might try -- phase (2). happyFail ERROR_TOK = happyFixupFailed happyFail i = happyTryFixup i -- Enter Error Fixup (see Note [Error recovery]): -- generate an error token, save the old token and carry on. 
-- When a `happyShift` accepts the error token, we will pop off the error token -- to resume parsing with the current lookahead `i`. happyTryFixup i tk action sts stk = DEBUG_TRACE("entering `error` fixup.\n") happyDoAction ERROR_TOK tk action sts (MK_ERROR_TOKEN(i) `HappyStk` stk) -- NB: `happyShift` will simply pop the error token and carry on with -- `tk`. Hence we don't change `tk` in the call here -- See Note [Error recovery], phase (2). -- Enter resumption mode after reporting the error by calling `happyResume`. happyFixupFailed tk st sts (x `HappyStk` stk) = let i = GET_ERROR_TOKEN(x) in DEBUG_TRACE("`error` fixup failed.\n") let resume = happyResume i tk st sts stk expected = happyExpectedTokens st sts in happyReport i tk expected resume -- happyResume :: Happy_Int -> Token -> Happy_Int -> _ -- See Note [happyResume] happyResume i tk st sts stk = pop_items [] st sts stk where !(Happy_GHC_Exts.I# n_starts) = happy_n_starts -- this is to test whether we have a start token !(Happy_GHC_Exts.I# eof_i) = happy_n_terms Happy_Prelude.- 1 -- this is the token number of the EOF token happy_list_to_list :: Happy_IntList -> [Happy_Prelude.Int] happy_list_to_list (HappyCons st sts) | LT(st, n_starts) = [(Happy_GHC_Exts.I# st)] | Happy_Prelude.otherwise = (Happy_GHC_Exts.I# st) : happy_list_to_list sts -- See (1) of Note [happyResume] pop_items catch_frames st sts stk | LT(st, n_starts) = DEBUG_TRACE("reached start state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", ") if Happy_Prelude.null catch_frames_new then DEBUG_TRACE("no resumption.\n") happyAbort else DEBUG_TRACE("now discard input, trying to anchor in states " Happy_Prelude.++ Happy_Prelude.show (Happy_Prelude.map (happy_list_to_list . 
Happy_Prelude.fst) (Happy_Prelude.reverse catch_frames_new)) Happy_Prelude.++ ".\n") discard_input_until_exp i tk (Happy_Prelude.reverse catch_frames_new) | (HappyCons st1 sts1) <- sts, _ `HappyStk` stk1 <- stk = pop_items catch_frames_new st1 sts1 stk1 where !catch_frames_new | HappyShift new_state <- happyDecodeAction (happyNextAction CATCH_TOK st) , DEBUG_TRACE("can shift catch token in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", into state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) Happy_Prelude.++ "\n") Happy_Prelude.null (Happy_Prelude.filter (\(HappyCons _ (HappyCons h _),_) -> EQ(st,h)) catch_frames) = (HappyCons new_state (HappyCons st sts), MK_ERROR_TOKEN(i) `HappyStk` stk):catch_frames -- MK_ERROR_TOKEN(i) is just some dummy that should not be accessed by user code | Happy_Prelude.otherwise = DEBUG_TRACE("already shifted or can't shift catch in " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ "\n") catch_frames -- See (2) of Note [happyResume] discard_input_until_exp i tk catch_frames | Happy_Prelude.Just (HappyCons st (HappyCons catch_st sts), catch_frame) <- some_catch_state_shifts i catch_frames = DEBUG_TRACE("found expected token in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ " after shifting from " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# catch_st) Happy_Prelude.++ ": " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ "\n") happyDoAction i tk st (HappyCons catch_st sts) catch_frame | EQ(i,eof_i) -- is i EOF? = DEBUG_TRACE("reached EOF, cannot resume. 
abort parse :(\n") happyAbort | Happy_Prelude.otherwise = DEBUG_TRACE("discard token " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ "\n") happyLex (\eof_tk -> discard_input_until_exp eof_i eof_tk catch_frames) -- eof (\i tk -> discard_input_until_exp i tk catch_frames) -- not eof some_catch_state_shifts _ [] = DEBUG_TRACE("no catch state could shift.\n") Happy_Prelude.Nothing some_catch_state_shifts i catch_frames@(((HappyCons st sts),_):_) = try_head i st sts catch_frames where try_head i st sts catch_frames = -- PRECONDITION: head catch_frames = (HappyCons st sts) DEBUG_TRACE("trying token " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ " in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ": ") case happyDecodeAction (happyNextAction i st) of HappyFail -> DEBUG_TRACE("fail.\n") some_catch_state_shifts i (Happy_Prelude.tail catch_frames) HappyAccept -> DEBUG_TRACE("accept.\n") Happy_Prelude.Just (Happy_Prelude.head catch_frames) HappyShift _ -> DEBUG_TRACE("shift.\n") Happy_Prelude.Just (Happy_Prelude.head catch_frames) HappyReduce r -> case happySimulateReduce r st sts of (HappyCons st1 sts1) -> try_head i st1 sts1 catch_frames happySimulateReduce r st sts = DEBUG_TRACE("simulate reduction of rule " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# r) Happy_Prelude.++ ", ") let (# nt, len #) = happyIndexRuleArr r in DEBUG_TRACE("nt " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# nt) Happy_Prelude.++ ", len: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# len) Happy_Prelude.++ ", new_st ") let !(sts1@(HappyCons st1 _)) = happyDrop len (HappyCons st sts) new_st = happyIndexGotoTable nt st1 in DEBUG_TRACE(Happy_Prelude.show (Happy_GHC_Exts.I# new_st) Happy_Prelude.++ ".\n") (HappyCons new_st sts1) happyTokenToString :: Happy_Prelude.Int -> Happy_Prelude.String happyTokenToString i = happyTokenStrings Happy_Prelude.!! 
(i Happy_Prelude.- 2) -- 2: errorTok, catchTok happyExpectedTokens :: Happy_Int -> Happy_IntList -> [Happy_Prelude.String] -- Upon a parse error, we want to suggest tokens that are expected in that -- situation. This function computes such tokens. -- It works by examining the top of the state stack. -- For every token number that does a shift transition, record that token number. -- For every token number that does a reduce transition, simulate that reduction -- on the state stack and repeat. -- The recorded token numbers are then formatted with 'happyTokenToString' and -- returned. happyExpectedTokens st sts = DEBUG_TRACE("constructing expected tokens.\n") Happy_Prelude.map happyTokenToString (search_shifts st sts []) where search_shifts st sts shifts = Happy_Prelude.foldr (add_action st sts) shifts (distinct_actions st) add_action st sts (Happy_GHC_Exts.I# i, Happy_GHC_Exts.I# act) shifts = DEBUG_TRACE("found action in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", input " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ ", " Happy_Prelude.++ Happy_Prelude.show (happyDecodeAction act) Happy_Prelude.++ "\n") case happyDecodeAction act of HappyFail -> shifts HappyAccept -> shifts -- This would always be %eof or error... 
Not helpful HappyShift _ -> Happy_Prelude.insert (Happy_GHC_Exts.I# i) shifts HappyReduce r -> case happySimulateReduce r st sts of (HappyCons st1 sts1) -> search_shifts st1 sts1 shifts distinct_actions st -- The (token number, action) pairs of all actions in the given state = ((-1), (Happy_GHC_Exts.I# (happyIndexOffAddr happyDefActions st))) : [ (i, act) | i <- [begin_i..happy_n_terms], act <- get_act row_off i ] where row_off = happyIndexOffAddr happyActOffsets st begin_i = 2 -- +2: errorTok,catchTok get_act off (Happy_GHC_Exts.I# i) -- happyIndexActionTable with cached row offset | let off_i = PLUS(off,i) , GTE(off_i,0#) , EQ(happyIndexOffAddr happyCheck off_i,i) = [(Happy_GHC_Exts.I# (happyIndexOffAddr happyTable off_i))] | Happy_Prelude.otherwise = [] -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Happy_Prelude.error "Internal Happy parser panic. This is not supposed to happen! Please open a bug report at https://github.com/haskell/happy/issues.\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_Int -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Happy_GHC_Exts.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. 
{-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-lib-2.1.7/frontend/src/Happy/Frontend/Lexer.lhs0000644000000000000000000002601607346545000020641 0ustar0000000000000000----------------------------------------------------------------------------- The lexer. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Happy.Frontend.Lexer ( > Token(..), > TokenId(..), > HasLexer(..) ) where > import Happy.Frontend.ParseMonad.Class > import Data.Char ( isSpace, isAlphaNum, isDigit, digitToInt ) > data Token > = TokenInfo String TokenId > | TokenNum Int TokenId > | TokenKW TokenId > | TokenEOF > tokenToId :: Token -> TokenId > tokenToId (TokenInfo _ i) = i > tokenToId (TokenNum _ i) = i > tokenToId (TokenKW i) = i > tokenToId TokenEOF = error "tokenToId TokenEOF" > instance Eq Token where > i == i' = tokenToId i == tokenToId i' > instance Ord Token where > i <= i' = tokenToId i <= tokenToId i' > data TokenId > = TokId -- words and symbols > | TokSpecId_TokenType -- %tokentype > | TokSpecId_Token -- %token > | TokSpecId_Name -- %name > | TokSpecId_Partial -- %partial > | TokSpecId_Lexer -- %lexer > | TokSpecId_ImportedIdentity -- %importedidentity > | TokSpecId_Monad -- %monad > | TokSpecId_Nonassoc -- %nonassoc > | TokSpecId_Left -- %left > | TokSpecId_Right -- %right > | TokSpecId_Prec -- %prec > | TokSpecId_Shift -- %shift > | TokSpecId_Expect -- %expect > | TokSpecId_Error -- %error > | TokSpecId_ErrorExpected -- %error.expected > | 
TokSpecId_ErrorHandlerType -- %errorhandlertype > | TokSpecId_Attributetype -- %attributetype > | TokSpecId_Attribute -- %attribute > | TokPragmaQuote -- stuff inside {-# .. #-} > | TokCodeQuote -- stuff inside { .. } > | TokColon -- : > | TokSemiColon -- ; > | TokDoubleColon -- :: > | TokDoublePercent -- %% > | TokBar -- | > | TokNum -- Integer > | TokParenL -- ( > | TokParenR -- ) > | TokComma -- , > deriving (Eq,Ord,Show) ToDo: proper text instance here, for use in parser error messages. > instance HasLexer Token where > lexToken = lexer > lexer :: (Token -> Pfunc a) -> Pfunc a > lexer cont = lexer' > where lexer' "" = cont TokenEOF "" > lexer' ('-':'-':r) = lexer' (dropWhile (/= '\n') r) > lexer' ('{':'-':'#':r) = lexPragma cont r > lexer' ('{':'-':r) = \line -> lexNestedComment line lexer' r line > lexer' (c:rest) = nextLex cont c rest > nextLex :: (Token -> Pfunc a) -> Char -> String -> Int -> ParseResult a > nextLex cont c = case c of > '\n' -> \rest line -> lexer cont rest (line+1) > '%' -> lexPercent cont > ':' -> lexColon cont > ';' -> cont (TokenKW TokSemiColon) > '|' -> cont (TokenKW TokBar) > '\'' -> lexChar cont > '"'{-"-}-> lexString cont > '{' -> lexCode cont > '(' -> cont (TokenKW TokParenL) > ')' -> cont (TokenKW TokParenR) > ',' -> cont (TokenKW TokComma) > _ > | isSpace c -> lexer cont > | c >= 'a' && c <= 'z' > || c >= 'A' && c <= 'Z' -> lexId cont c > | isDigit c -> lexNum cont c > _ -> lexError ("lexical error before `" ++ c : "'") Percents come in two forms, in pairs, or followed by a special identifier. 
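The character-by-character matching below can also be phrased with `Data.List.stripPrefix`; a simplified sketch, assuming an illustrative directive table and return type rather than the actual `Pfunc` continuation machinery, but keeping the same rule that a directive must end at a non-alphanumeric character:

```haskell
module Main where

import Data.Char (isAlphaNum)
import Data.List (stripPrefix)

-- Simplified directive matcher: try each keyword in order and require
-- that the match ends at a non-identifier character, mirroring end_of_id.
-- Longer directives must come first ("error.expected" before "error").
matchDirective :: String -> Maybe (String, String)
matchDirective s = go directives
  where
    directives =
      ["error.expected", "errorhandlertype", "error", "token", "left", "right"]
    endOfId (c:_) = not (isAlphaNum c)
    endOfId []    = True
    go [] = Nothing
    go (d:ds)
      | Just rest <- stripPrefix d s, endOfId rest = Just (d, rest)
      | otherwise = go ds

main :: IO ()
main = do
  print (matchDirective "error.expected foo")
  print (matchDirective "errors") -- "error" followed by 's': no match
```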
> lexPercent :: (Token -> Pfunc a) -> [Char] -> Int -> ParseResult a > lexPercent cont s = case s of > '%':rest -> cont (TokenKW TokDoublePercent) rest > 't':'o':'k':'e':'n':'t':'y':'p':'e':rest | end_of_id rest -> > cont (TokenKW TokSpecId_TokenType) rest > 't':'o':'k':'e':'n':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Token) rest > 'n':'a':'m':'e':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Name) rest > 'p':'a':'r':'t':'i':'a':'l':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Partial) rest > 'i':'m':'p':'o':'r':'t':'e':'d':'i':'d':'e':'n':'t':'i':'t':'y':rest | end_of_id rest -> > cont (TokenKW TokSpecId_ImportedIdentity) rest > 'm':'o':'n':'a':'d':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Monad) rest > 'l':'e':'x':'e':'r':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Lexer) rest > 'n':'o':'n':'a':'s':'s':'o':'c':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Nonassoc) rest > 'l':'e':'f':'t':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Left) rest > 'r':'i':'g':'h':'t':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Right) rest > 'p':'r':'e':'c':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Prec) rest > 's':'h':'i':'f':'t':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Shift) rest > 'e':'x':'p':'e':'c':'t':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Expect) rest > 'e':'r':'r':'o':'r':'.':'e':'x':'p':'e':'c':'t':'e':'d':rest | end_of_id rest -> > cont (TokenKW TokSpecId_ErrorExpected) rest > 'e':'r':'r':'o':'r':'h':'a':'n':'d':'l':'e':'r':'t':'y':'p':'e':rest | end_of_id rest -> > cont (TokenKW TokSpecId_ErrorHandlerType) rest > 'e':'r':'r':'o':'r':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Error) rest > 'a':'t':'t':'r':'i':'b':'u':'t':'e':'t':'y':'p':'e':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Attributetype) rest > 'a':'t':'t':'r':'i':'b':'u':'t':'e':rest | end_of_id rest -> > cont (TokenKW TokSpecId_Attribute) rest > _ -> lexError ("unrecognised directive: %" ++ > takeWhile 
(not.isSpace) s) s > where > end_of_id (c:_) = not (isAlphaNum c) > end_of_id [] = True > lexColon :: (Token -> Pfunc a) -> [Char] -> Int -> ParseResult a > lexColon cont (':':rest) = cont (TokenKW TokDoubleColon) rest > lexColon cont rest = cont (TokenKW TokColon) rest > lexId :: (Token -> Pfunc a) -> Char -> String -> Int -> ParseResult a > lexId cont c rest = > readId rest (\ ident rest' -> cont (TokenInfo (c:ident) TokId) rest') > lexChar :: (Token -> Pfunc a) -> String -> Int -> ParseResult a > lexChar cont rest = lexReadChar rest > (\ ident -> cont (TokenInfo ("'" ++ ident ++ "'") TokId)) > lexString :: (Token -> Pfunc a) -> String -> Int -> ParseResult a > lexString cont rest = lexReadString rest > (\ ident -> cont (TokenInfo ("\"" ++ ident ++ "\"") TokId)) > lexCode :: (Token -> Pfunc a) -> String -> Int -> ParseResult a > lexCode cont rest = lexReadCode rest (0 :: Integer) "" cont > lexNum :: (Token -> Pfunc a) -> Char -> String -> Int -> ParseResult a > lexNum cont c rest = > readNum rest (\ num rest' -> > cont (TokenNum (stringToInt (c:num)) TokNum) rest') > where stringToInt = foldl (\n c' -> digitToInt c' + 10*n) 0 > cleanupCode :: String -> String > cleanupCode s = > dropWhile isSpace (reverse (dropWhile isSpace (reverse s))) This has to match for @}@ that are {\em not} in strings. The code here is a bit tricky, but should work in most cases. 
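Stripped of the continuation plumbing, the brace-matching idea can be sketched as a standalone function (names are illustrative): scan for the `}` that closes the code block, tracking nesting depth and skipping string literals so a `}` inside a string does not count (escape sequences and character literals are omitted here for brevity).

```haskell
module Main where

-- Find the code enclosed by { ... }, given input positioned just after
-- the opening '{'. Returns the code body and the remaining input.
spanCode :: String -> Maybe (String, String)
spanCode = go (0 :: Int) ""
  where
    go _ _   []      = Nothing               -- no closing '}'
    go 0 acc ('}':r) = Just (reverse acc, r) -- matching close found
    go n acc ('}':r) = go (n - 1) ('}' : acc) r
    go n acc ('{':r) = go (n + 1) ('{' : acc) r
    go n acc ('"':r) =                       -- skip a string literal
      case break (== '"') r of
        (str, '"':r') -> go n (('"' : reverse str) ++ '"' : acc) r'
        _             -> Nothing             -- unterminated string
    go n acc (c:r)   = go n (c : acc) r

main :: IO ()
main = do
  print (spanCode "foo { bar } baz } rest")
  print (spanCode "putStr \"}\" } more")
```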
> lexReadCode :: (Eq a, Num a) > => String -> a -> String -> (Token -> Pfunc b) -> Int > -> ParseResult b > lexReadCode s n c = case s of > '\n':r -> \cont l -> lexReadCode r n ('\n':c) cont (l+1) > '{' :r -> lexReadCode r (n+1) ('{':c) > '}' :r > | n == 0 -> \cont -> cont (TokenInfo ( > cleanupCode (reverse c)) TokCodeQuote) r > | otherwise -> lexReadCode r (n-1) ('}':c) > '"'{-"-}:r -> lexReadString r (\ str r' -> > lexReadCode r' n ('"' : (reverse str) ++ '"' : c)) > a: '\'':r | isAlphaNum a -> lexReadCode r n ('\'':a:c) > '\'' :r -> lexReadSingleChar r (\ str r' -> > lexReadCode r' n ((reverse str) ++ '\'' : c)) > ch:r -> lexReadCode r n (ch:c) > [] -> \_cont -> lexError "No closing '}' in code segment" [] We need to take similar care when parsing pragmas. > lexPragma :: (Token -> Pfunc a) -> String -> Int -> ParseResult a > lexPragma cont s = lexReadPragma s "" cont > lexReadPragma :: String -> String -> (Token -> Pfunc a) -> Int -> ParseResult a > lexReadPragma s c = case s of > '#':'-':'}':r -> \cont -> cont (TokenInfo (cleanupCode (reverse c)) TokPragmaQuote) r > '\n':r -> \cont l -> lexReadPragma r ('\n':c) cont (l+1) > '"'{-"-}:r -> lexReadString r (\ str r' -> > lexReadPragma r' ('"' : (reverse str) ++ '"' : c)) > a: '\'':r | isAlphaNum a -> lexReadPragma r ('\'':a:c) > '\'' :r -> lexReadSingleChar r (\ str r' -> > lexReadPragma r' ((reverse str) ++ '\'' : c)) > ch:r -> lexReadPragma r (ch:c) > [] -> \_cont -> lexError "No closing '#-}' in code segment" [] ---------------------------------------------------------------------------- Utilities that read the rest of a token. > readId :: String -> (String -> String -> a) -> a > readId (c:r) fn | isIdPart c = readId r (fn . (:) c) > readId r fn = fn [] r > readNum :: String -> (String -> String -> a) -> a > readNum (c:r) fn | isDigit c = readNum r (fn . 
(:) c) > readNum r fn = fn [] r > isIdPart :: Char -> Bool > isIdPart c = > c >= 'a' && c <= 'z' > || c >= 'A' && c <= 'Z' > || c >= '0' && c <= '9' > || c == '_' > lexReadSingleChar :: String -> (String -> String -> a) -> a > lexReadSingleChar ('\\':c:'\'':r) fn = fn ('\\':c:"'") r > lexReadSingleChar (c:'\'':r) fn = fn (c:"'") r > lexReadSingleChar r fn = fn "" r > lexReadChar :: String -> (String -> String -> a) -> a > lexReadChar ('\'':r) fn = fn "" r > lexReadChar ('\\':'\'':r) fn = lexReadChar r (fn . (:) '\\' . (:) '\'') > lexReadChar ('\\':c:r) fn = lexReadChar r (fn . (:) '\\' . (:) c) > lexReadChar (c:r) fn = lexReadChar r (fn . (:) c) > lexReadChar [] fn = fn "" [] > lexReadString :: String -> (String -> String -> a) -> a > lexReadString ('"'{-"-}:r) fn = fn "" r > lexReadString ('\\':'"':r) fn = lexReadString r (fn . (:) '\\' . (:) '"') > lexReadString ('\\':c:r) fn = lexReadString r (fn . (:) '\\' . (:) c) > lexReadString (c:r) fn = lexReadString r (fn . (:) c) > lexReadString [] fn = fn "" [] > lexError :: String -> String -> Int -> ParseResult a > lexError err = \_ l -> Left (show l ++ ": " ++ err ++ "\n") > lexNestedComment :: Int -> ([Char] -> Int -> ParseResult a) -> [Char] -> Int > -> ParseResult a > lexNestedComment l cont r = > case r of > '-':'}':r' -> cont r' > '{':'-':r' -> \line -> lexNestedComment line > (\r'' -> lexNestedComment l cont r'') r' line > '\n':r' -> \line -> lexNestedComment l cont r' (line+1) > _:r' -> lexNestedComment l cont r' > "" -> \_ -> lexError "unterminated comment" r l happy-lib-2.1.7/frontend/src/Happy/Frontend/Mangler.lhs0000644000000000000000000003235507346545000021152 0ustar0000000000000000/----------------------------------------------------------------------------- The Grammar data type. 
(c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- Mangler converts AbsSyn to Grammar > {-# LANGUAGE ScopedTypeVariables #-} > module Happy.Frontend.Mangler (mangler) where > import Happy.Grammar > import Happy.Frontend.AbsSyn > import Happy.Frontend.Mangler.Monad > import Happy.Frontend.AttrGrammar.Mangler > import Happy.Frontend.ParamRules > import Data.Array ( Array, (!), accumArray, array, listArray ) > import Data.Char ( isAlphaNum, isDigit, isLower ) > import Data.List ( zip4, sortBy ) > import Data.Ord > import Control.Monad.Writer ( Writer, mapWriter, runWriter ) ----------------------------------------------------------------------------- -- The Mangler This bit is a real mess, mainly because of the error message support. > mangler :: FilePath -> AbsSyn String -> Either [ErrMsg] (Grammar String, Maybe AttributeGrammarExtras, Directives) > mangler file abssyn@(AbsSyn dirs _) > | null errs = Right (gd, mAg, ps) > | otherwise = Left errs > where mAg = getAttributeGrammarExtras dirs > ((gd, ps), errs) = runWriter (manglerM "no code" checkCode file abssyn) If any attribute directives were used, we are in an attribute grammar, so go do special processing. If not, pass on to the regular processing routine > checkCode :: CodeChecker String > checkCode = case mAg of > Nothing -> \lhs _ code -> > doCheckCode (length lhs) code > Just a -> \lhs nonterm_names code -> > rewriteAttributeGrammar lhs nonterm_names code a > -- | Function to check elimination rules > type CodeChecker e = [Name] -> [Name] -> e -> M (e, [Int]) > manglerM > :: forall e > . e > -- ^ Empty elimination rule, used for starting productions. Will never be run. 
> -> CodeChecker e > -> FilePath > -> AbsSyn e > -> M (Grammar e, Directives) > manglerM noCode checkCode file (AbsSyn dirs rules') = > -- add filename to all error messages > mapWriter (\(a,e) -> (a, map (\s -> file ++ ": " ++ s) e)) $ do > rules <- case expand_rules rules' of > Left err -> addErr err >> return [] > Right as -> return as > nonterm_strs <- checkRules [n | Rule1 n _ _ <- rules] "" [] > let > terminal_strs = concat (map getTerm dirs) ++ [eofName] > first_nt, first_t, last_start, last_nt, last_t :: Name > first_nt = MkName $ getName firstStartTok + length starts' > first_t = MkName $ getName first_nt + length nonterm_strs > last_start = MkName $ getName first_nt - 1 > last_nt = MkName $ getName first_t - 1 > last_t = MkName $ getName first_t + length terminal_strs - 1 > start_names = [ firstStartTok .. last_start ] > nonterm_names = [ first_nt .. last_nt ] > terminal_names = [ first_t .. last_t ] > starts' = case getParserNames dirs of > [] -> [TokenName "happyParse" Nothing False] > ns -> ns > error_resumptive | ResumptiveErrorHandler{} <- getError dirs = True > | otherwise = False > > start_strs = [ startName++'_':p | (TokenName p _ _) <- starts' ] Build up a mapping from name values to strings. > name_env = (errorTok, errorName) : > (catchTok, catchName) : > (dummyTok, dummyName) : > zip start_names start_strs ++ > zip nonterm_names nonterm_strs ++ > zip terminal_names terminal_strs > lookupName :: String -> [Name] > lookupName n = [ t | (t,r) <- name_env, r == n > , t /= catchTok || error_resumptive ] > -- hide catchName unless %errorresumptive is active > -- issue93.y uses catch as a nonterminal, we should not steal it > mapToName str' = > case lookupName str' of > [a] -> return a > [] -> do addErr ("unknown identifier '" ++ str' ++ "'") > return errorTok -- SG: What a confusing use of errorTok.. Use dummyTok? > (a:_) -> do addErr ("multiple use of '" ++ str' ++ "'") > return a Start symbols... 
> -- default start token is the first non-terminal in the grammar > lookupStart (TokenName _ Nothing _) = return first_nt > lookupStart (TokenName _ (Just n) _) = mapToName n > lookupStart _ = error "lookupStart: Not a TokenName" > -- in > start_toks <- mapM lookupStart starts' > let > parser_names = [ s | TokenName s _ _ <- starts' ] > start_partials = [ b | TokenName _ _ b <- starts' ] > start_prods = zipWith (\nm tok -> Production nm [tok] (noCode,[]) No) > start_names start_toks Deal with priorities... > priodir = zip [1..] (getPrios dirs) > > mkPrio :: Int -> Directive a -> Priority > mkPrio i (TokenNonassoc _) = Prio None i > mkPrio i (TokenRight _) = Prio RightAssoc i > mkPrio i (TokenLeft _) = Prio LeftAssoc i > mkPrio _ _ = error "Panic: impossible case in mkPrio" > prios = [ (name,mkPrio i dir) > | (i,dir) <- priodir > , nm <- getPrioNames dir > , name <- lookupName nm > ] > prioByString = [ (name, mkPrio i dir) > | (i,dir) <- priodir > , name <- getPrioNames dir > ] Translate the rules from string to name-based. 
> convNT (Rule1 nt prods ty) > = do nt' <- mapToName nt > return (nt', prods, ty) > > transRule (nt, prods, _ty) > = mapM (finishRule nt) prods > > finishRule :: Name -> Prod1 e -> Writer [ErrMsg] (Production e) > finishRule nt (Prod1 lhs code line prec) > = mapWriter (\(a,e) -> (a, map (addLine line) e)) $ do > lhs' <- mapM mapToName lhs > code' <- checkCode lhs' nonterm_names code > case mkPrec lhs' prec of > Left s -> do addErr ("Undeclared precedence token: " ++ s) > return (Production nt lhs' code' No) > Right p -> return (Production nt lhs' code' p) > > mkPrec :: [Name] -> Prec -> Either String Priority > mkPrec lhs PrecNone = > case filter (flip elem terminal_names) lhs of > [] -> Right No > xs -> case lookup (last xs) prios of > Nothing -> Right No > Just p -> Right p > mkPrec _ (PrecId s) = > case lookup s prioByString of > Nothing -> Left s > Just p -> Right p > > mkPrec _ PrecShift = Right PrioLowest > > -- in > rules1 <- mapM convNT rules > rules2 <- mapM transRule rules1 > let > type_env = [(nt, t) | Rule1 nt _ (Just (t,[])) <- rules] ++ > [(nt, getTokenType dirs) | nt <- terminal_strs] -- XXX: Doesn't handle $$ type! 
> > fixType (ty,s) = go "" ty > where go acc [] = return (reverse acc) > go acc (c:r) | isLower c = -- look for a run of alphanumerics starting with a lower case letter > let (cs,r1) = span isAlphaNum r > go1 x = go (reverse x ++ acc) r1 > in case lookup (c:cs) s of > Nothing -> go1 (c:cs) -- no binding found > Just a -> case lookup a type_env of > Nothing -> do > addErr ("Parameterized rule argument '" ++ a ++ "' does not have type") > go1 (c:cs) > Just t -> go1 $ "(" ++ t ++ ")" > | otherwise = go (c:acc) r > > convType (nm, t) > = do t' <- fixType t > return (nm, t') > > -- in > tys <- mapM convType [ (nm, t) | (nm, _, Just t) <- rules1 ] > > let > type_array :: Array Name (Maybe String) > type_array = accumArray (\_ x -> x) Nothing (first_nt, last_nt) > [ (nm, Just t) | (nm, t) <- tys ] > env_array :: Array Name String > env_array = array (errorTok, last_t) name_env > -- in Get the token specs in terms of Names. > let > fixTokenSpec (a,b) = do n <- mapToName a; return (n,b) > -- in > tokspec <- mapM fixTokenSpec (getTokenSpec dirs) > let > ass = combinePairs [ (a,no) > | (Production a _ _ _,no) <- zip productions' [0..] ] > arr = array (firstStartTok, MkName $ length ass - 1 + getName firstStartTok) ass > lookup_prods :: Name -> [Int] > lookup_prods x | x >= firstStartTok && x < first_t = arr ! 
x > lookup_prods _ = error "lookup_prods" > > productions' = start_prods ++ concat rules2 > prod_array = listArray (0,length productions' - 1) productions' > return (Grammar { > productions = productions', > lookupProdNo = (prod_array !), > lookupProdsOfName = lookup_prods, > token_specs = tokspec, > terminals = errorTok : catchTok : terminal_names, > non_terminals = start_names ++ nonterm_names, > -- INCLUDES the %start tokens > starts = zip4 parser_names start_names start_toks > start_partials, > types = type_array, > token_names = env_array, > first_nonterm = first_nt, > first_term = first_t, > eof_term = last terminal_names, > priorities = prios > }, > Directives { > imported_identity = getImportedIdentity dirs, > monad = getMonad dirs, > lexer = getLexer dirs, > error_handler = getError dirs, > error_expected = getErrorExpectedMode dirs, > token_type = getTokenType dirs, > expect = getExpect dirs > }) Gofer-like stuff: > combinePairs :: (Ord a) => [(a,b)] -> [(a,[b])] > combinePairs xs = > combine [ (a,[b]) | (a,b) <- sortBy (comparing fst) xs] > where > combine [] = [] > combine ((a,b):(c,d):r) | a == c = combine ((a,b++d) : r) > combine (a:r) = a : combine r > For combining actions with possible error messages. > addLine :: Int -> String -> String > addLine l s = show l ++ ": " ++ s > getTerm :: Directive a -> [a] > getTerm (TokenSpec stuff) = map fst stuff > getTerm _ = [] So is this. > checkRules :: [String] -> String -> [String] -> Writer [ErrMsg] [String] > checkRules (name:rest) above nonterms > | name == above = checkRules rest name nonterms > | name `elem` nonterms > = do addErr ("Multiple rules for '" ++ name ++ "'") > checkRules rest name nonterms > | otherwise = checkRules rest name (name : nonterms) > checkRules [] _ nonterms = return (reverse nonterms) ----------------------------------------------------------------------------- -- Check for every $i that i is <= the arity of the rule. 
-- At the same time, we collect a list of the variables actually used in this -- code, which is used by the backend. > doCheckCode :: Int -> String -> M (String, [Int]) > doCheckCode arity code0 = go code0 "" [] > where go code acc used = > case code of > [] -> return (reverse acc, used) > > '"' :r -> case reads code :: [(String,String)] of > [] -> go r ('"':acc) used > (s,r'):_ -> go r' (reverse (show s) ++ acc) used > a:'\'' :r | isAlphaNum a -> go r ('\'':a:acc) used > '\'' :r -> case reads code :: [(Char,String)] of > [] -> go r ('\'':acc) used > (c,r'):_ -> go r' (reverse (show c) ++ acc) used > '\\':'$':r -> go r ('$':acc) used > > '$':'>':r -- the "rightmost token" > | arity == 0 -> do addErr "$> in empty rule" > go r acc used > | otherwise -> go r (reverse (mkHappyVar arity) ++ acc) > (arity : used) > > '$':r@(i:_) | isDigit i -> > case reads r :: [(Int,String)] of > (j,r'):_ -> > if j > arity > then do addErr ('$': show j ++ " out of range") > go r' acc used > else go r' (reverse (mkHappyVar j) ++ acc) > (j : used) > [] -> error "doCheckCode []" > c:r -> go r (c:acc) used > mkHappyVar :: Int -> String > mkHappyVar n = "happy_var_" ++ show n happy-lib-2.1.7/frontend/src/Happy/Frontend/Mangler/0000755000000000000000000000000007346545000020432 5ustar0000000000000000happy-lib-2.1.7/frontend/src/Happy/Frontend/Mangler/Monad.lhs0000644000000000000000000000107107346545000022177 0ustar0000000000000000----------------------------------------------------------------------------- Monad for error handling for the mangler Pulled out so it can be shared with the attribute grammar part of the mangler too. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Happy.Frontend.Mangler.Monad > ( ErrMsg > , M > , addErr > ) where > import Control.Monad.Writer ( Writer, MonadWriter(..) 
) > type ErrMsg = String > type M a = Writer [ErrMsg] a > addErr :: ErrMsg -> M () > addErr e = tell [e] happy-lib-2.1.7/frontend/src/Happy/Frontend/ParamRules.hs0000644000000000000000000000776107346545000021467 0ustar0000000000000000module Happy.Frontend.ParamRules(expand_rules, Prod1(..), Rule1(..)) where import Happy.Frontend.AbsSyn import Control.Monad.Writer import Control.Monad.Except(throwError) import Control.Monad.Trans.Except import Data.List(partition,intersperse) import qualified Data.Set as S import qualified Data.Map as M -- XXX: Make it work with old GHC. -- | Desugar parameterized productions into non-parameterized ones -- -- This transformation is fairly straightforward: we walk through every rule -- and collect every possible instantiation of parameterized productions. Then, -- we generate a new non-parametrized rule for each of these. expand_rules :: [Rule e] -> Either String [Rule1 e] expand_rules rs = do let (funs,rs1) = split_rules rs (as,is) <- runM2 (mapM (`inst_rule` []) rs1) bs <- make_insts funs (S.toList is) S.empty return (as++bs) type RuleName = String data Inst = Inst RuleName [RuleName] deriving (Eq, Ord) newtype Funs e = Funs (M.Map RuleName (Rule e)) -- | Similar to 'Rule', but `Term`'s have been flattened into `RuleName`'s data Rule1 e = Rule1 RuleName [Prod1 e] (Maybe (String, Subst)) -- | Similar to 'Prod', but `Term`'s have been flattened into `RuleName`'s data Prod1 e = Prod1 [RuleName] e Int Prec inst_name :: Inst -> RuleName inst_name (Inst f []) = f --inst_name (Inst f xs) = f ++ "(" ++ concat (intersperse "," xs) ++ ")" inst_name (Inst f xs) = f ++ "__" ++ concat (intersperse "__" xs) ++ "__" -- | A renaming substitution used when we instantiate a parameterized rule. type Subst = [(RuleName,RuleName)] type M1 = Writer (S.Set Inst) type M2 = ExceptT String M1 -- | Collects the instances arising from a term. 
from_term :: Subst -> Term -> M1 RuleName from_term s (App f []) = return $ case lookup f s of Just g -> g Nothing -> f from_term s (App f ts) = do xs <- from_terms s ts let i = Inst f xs tell (S.singleton i) return $ inst_name i -- | Collects the instances arising from a list of terms. from_terms :: Subst -> [Term] -> M1 [RuleName] from_terms s ts = mapM (from_term s) ts -- XXX: perhaps change the line to the line of the instance inst_prod :: Subst -> Prod e -> M1 (Prod1 e) inst_prod s (Prod ts c l p) = do xs <- from_terms s ts return (Prod1 xs c l p) inst_rule :: Rule e -> [RuleName] -> M2 (Rule1 e) inst_rule (Rule x xs ps t) ts = do s <- build xs ts [] ps1 <- lift $ mapM (inst_prod s) ps let y = inst_name (Inst x ts) return (Rule1 y ps1 (fmap (\x' -> (x',s)) t)) where build (x':xs') (t':ts') m = build xs' ts' ((x',t'):m) build [] [] m = return m build xs' [] _ = err ("Need " ++ show (length xs') ++ " more arguments") build _ ts' _ = err (show (length ts') ++ " arguments too many.") err m = throwError ("In " ++ inst_name (Inst x ts) ++ ": " ++ m) make_rule :: Funs e -> Inst -> M2 (Rule1 e) make_rule (Funs funs) (Inst f xs) = case M.lookup f funs of Just r -> inst_rule r xs Nothing -> throwError ("Undefined rule: " ++ f) runM2 :: ExceptT e (Writer w) a -> Either e (a, w) runM2 m = case runWriter (runExceptT m) of (Left e,_) -> Left e (Right a,xs) -> Right (a,xs) make_insts :: Funs e -> [Inst] -> S.Set Inst -> Either String [Rule1 e] make_insts _ [] _ = return [] make_insts funs is done = do (as,ws) <- runM2 (mapM (make_rule funs) is) let done1 = S.union (S.fromList is) done let is1 = filter (not . 
(`S.member` done1)) (S.toList ws) bs <- make_insts funs is1 done1 return (as++bs) split_rules :: [Rule e] -> (Funs e,[Rule e]) split_rules rs = let (xs,ys) = partition has_args rs in (Funs (M.fromList [ (x,r) | r@(Rule x _ _ _) <- xs ]),ys) where has_args (Rule _ args _ _) = not (null args) happy-lib-2.1.7/frontend/src/Happy/Frontend/ParseMonad.hs0000644000000000000000000000137507346545000021440 0ustar0000000000000000{-# LANGUAGE CPP #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE TypeSynonymInstances #-} #if __GLASGOW_HASKELL__ >= 800 {-# OPTIONS_GHC -Wno-orphans #-} #else {-# OPTIONS_GHC -fno-warn-orphans #-} #endif module Happy.Frontend.ParseMonad where import Control.Monad.Reader import Happy.Frontend.ParseMonad.Class type P = ReaderT (String, Int) ParseResult mkP :: (String -> Int -> ParseResult a) -> P a mkP = ReaderT . uncurry runP :: P a -> String -> Int -> ParseResult a runP f s l = runReaderT f (s, l) instance ParseMonad P where failP mkStr = ReaderT (\(_, l) -> Left $ mkStr l) lineP = asks snd runFromStartP m s l = runP m s l lexTokenP :: HasLexer token => (token -> P r) -> P r lexTokenP k = ReaderT $ uncurry $ lexToken (\t -> runP $ k t) happy-lib-2.1.7/frontend/src/Happy/Frontend/ParseMonad/0000755000000000000000000000000007346545000021076 5ustar0000000000000000happy-lib-2.1.7/frontend/src/Happy/Frontend/ParseMonad/Class.hs0000644000000000000000000000052707346545000022503 0ustar0000000000000000module Happy.Frontend.ParseMonad.Class where type Pfunc a = String -> Int -> ParseResult a class HasLexer token where lexToken :: (token -> Pfunc r) -> Pfunc r type ParseResult = Either String class Monad p => ParseMonad p where failP :: (Int -> String) -> p a lineP :: p Int runFromStartP :: p a -> String -> Int -> ParseResult a happy-lib-2.1.7/frontend/src/Happy/Frontend/Parser.hs0000644000000000000000000022431407346545000020643 0ustar0000000000000000{-# OPTIONS_GHC -w #-} {-# LANGUAGE CPP #-} {-# LANGUAGE MagicHash #-} {-# LANGUAGE BangPatterns #-} {-# 
LANGUAGE TypeSynonymInstances #-} {-# LANGUAGE FlexibleInstances #-} {-# LANGUAGE PatternGuards #-} {-# LANGUAGE NoStrictData #-} {-# LANGUAGE UnboxedTuples #-} {-# LANGUAGE PartialTypeSignatures #-} {-# OPTIONS_GHC -w #-} module Happy.Frontend.Parser (ourParser) where import Happy.Frontend.ParseMonad.Class import Happy.Frontend.ParseMonad import Happy.Frontend.AbsSyn import Happy.Frontend.Lexer import qualified Control.Monad as Happy_Prelude import qualified Data.Bool as Happy_Prelude import qualified Data.Function as Happy_Prelude import qualified Data.Int as Happy_Prelude import qualified Data.List as Happy_Prelude import qualified Data.Maybe as Happy_Prelude import qualified Data.String as Happy_Prelude import qualified Data.Tuple as Happy_Prelude import qualified GHC.Err as Happy_Prelude import qualified GHC.Num as Happy_Prelude import qualified Text.Show as Happy_Prelude import qualified Data.Array as Happy_Data_Array import qualified Data.Bits as Bits import qualified GHC.Exts as Happy_GHC_Exts import Control.Applicative(Applicative(..)) import Control.Monad (ap) -- parser produced by Happy Version 2.1.6 newtype HappyAbsSyn = HappyAbsSyn HappyAny #if __GLASGOW_HASKELL__ >= 607 type HappyAny = Happy_GHC_Exts.Any #else type HappyAny = forall a . 
a #endif newtype HappyWrap5 = HappyWrap5 (BookendedAbsSyn) happyIn5 :: (BookendedAbsSyn) -> (HappyAbsSyn ) happyIn5 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap5 x) {-# INLINE happyIn5 #-} happyOut5 :: (HappyAbsSyn ) -> HappyWrap5 happyOut5 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut5 #-} newtype HappyWrap6 = HappyWrap6 (AbsSyn String) happyIn6 :: (AbsSyn String) -> (HappyAbsSyn ) happyIn6 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap6 x) {-# INLINE happyIn6 #-} happyOut6 :: (HappyAbsSyn ) -> HappyWrap6 happyOut6 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut6 #-} newtype HappyWrap7 = HappyWrap7 ([Rule String]) happyIn7 :: ([Rule String]) -> (HappyAbsSyn ) happyIn7 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap7 x) {-# INLINE happyIn7 #-} happyOut7 :: (HappyAbsSyn ) -> HappyWrap7 happyOut7 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut7 #-} newtype HappyWrap8 = HappyWrap8 (Rule String) happyIn8 :: (Rule String) -> (HappyAbsSyn ) happyIn8 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap8 x) {-# INLINE happyIn8 #-} happyOut8 :: (HappyAbsSyn ) -> HappyWrap8 happyOut8 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut8 #-} newtype HappyWrap9 = HappyWrap9 ([String]) happyIn9 :: ([String]) -> (HappyAbsSyn ) happyIn9 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap9 x) {-# INLINE happyIn9 #-} happyOut9 :: (HappyAbsSyn ) -> HappyWrap9 happyOut9 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut9 #-} newtype HappyWrap10 = HappyWrap10 ([String]) happyIn10 :: ([String]) -> (HappyAbsSyn ) happyIn10 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap10 x) {-# INLINE happyIn10 #-} happyOut10 :: (HappyAbsSyn ) -> HappyWrap10 happyOut10 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut10 #-} newtype HappyWrap11 = HappyWrap11 ([Prod String]) happyIn11 :: ([Prod String]) -> (HappyAbsSyn ) happyIn11 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap11 x) {-# INLINE happyIn11 #-} happyOut11 :: (HappyAbsSyn ) -> HappyWrap11 happyOut11 x = Happy_GHC_Exts.unsafeCoerce# x {-# 
INLINE happyOut11 #-} newtype HappyWrap12 = HappyWrap12 (Prod String) happyIn12 :: (Prod String) -> (HappyAbsSyn ) happyIn12 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap12 x) {-# INLINE happyIn12 #-} happyOut12 :: (HappyAbsSyn ) -> HappyWrap12 happyOut12 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut12 #-} newtype HappyWrap13 = HappyWrap13 (Term) happyIn13 :: (Term) -> (HappyAbsSyn ) happyIn13 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap13 x) {-# INLINE happyIn13 #-} happyOut13 :: (HappyAbsSyn ) -> HappyWrap13 happyOut13 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut13 #-} newtype HappyWrap14 = HappyWrap14 ([Term]) happyIn14 :: ([Term]) -> (HappyAbsSyn ) happyIn14 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap14 x) {-# INLINE happyIn14 #-} happyOut14 :: (HappyAbsSyn ) -> HappyWrap14 happyOut14 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut14 #-} newtype HappyWrap15 = HappyWrap15 ([Term]) happyIn15 :: ([Term]) -> (HappyAbsSyn ) happyIn15 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap15 x) {-# INLINE happyIn15 #-} happyOut15 :: (HappyAbsSyn ) -> HappyWrap15 happyOut15 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut15 #-} newtype HappyWrap16 = HappyWrap16 ([Term]) happyIn16 :: ([Term]) -> (HappyAbsSyn ) happyIn16 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap16 x) {-# INLINE happyIn16 #-} happyOut16 :: (HappyAbsSyn ) -> HappyWrap16 happyOut16 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut16 #-} newtype HappyWrap17 = HappyWrap17 (Prec) happyIn17 :: (Prec) -> (HappyAbsSyn ) happyIn17 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap17 x) {-# INLINE happyIn17 #-} happyOut17 :: (HappyAbsSyn ) -> HappyWrap17 happyOut17 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut17 #-} newtype HappyWrap18 = HappyWrap18 ([Directive String]) happyIn18 :: ([Directive String]) -> (HappyAbsSyn ) happyIn18 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap18 x) {-# INLINE happyIn18 #-} happyOut18 :: (HappyAbsSyn ) -> HappyWrap18 happyOut18 x = 
Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut18 #-} newtype HappyWrap19 = HappyWrap19 (Directive String) happyIn19 :: (Directive String) -> (HappyAbsSyn ) happyIn19 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap19 x) {-# INLINE happyIn19 #-} happyOut19 :: (HappyAbsSyn ) -> HappyWrap19 happyOut19 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut19 #-} newtype HappyWrap20 = HappyWrap20 (Maybe String) happyIn20 :: (Maybe String) -> (HappyAbsSyn ) happyIn20 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap20 x) {-# INLINE happyIn20 #-} happyOut20 :: (HappyAbsSyn ) -> HappyWrap20 happyOut20 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut20 #-} newtype HappyWrap21 = HappyWrap21 ([(String, TokenSpec)]) happyIn21 :: ([(String, TokenSpec)]) -> (HappyAbsSyn ) happyIn21 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap21 x) {-# INLINE happyIn21 #-} happyOut21 :: (HappyAbsSyn ) -> HappyWrap21 happyOut21 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut21 #-} newtype HappyWrap22 = HappyWrap22 ((String, TokenSpec)) happyIn22 :: ((String, TokenSpec)) -> (HappyAbsSyn ) happyIn22 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap22 x) {-# INLINE happyIn22 #-} happyOut22 :: (HappyAbsSyn ) -> HappyWrap22 happyOut22 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut22 #-} newtype HappyWrap23 = HappyWrap23 ([String]) happyIn23 :: ([String]) -> (HappyAbsSyn ) happyIn23 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap23 x) {-# INLINE happyIn23 #-} happyOut23 :: (HappyAbsSyn ) -> HappyWrap23 happyOut23 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut23 #-} newtype HappyWrap24 = HappyWrap24 (Maybe String) happyIn24 :: (Maybe String) -> (HappyAbsSyn ) happyIn24 x = Happy_GHC_Exts.unsafeCoerce# (HappyWrap24 x) {-# INLINE happyIn24 #-} happyOut24 :: (HappyAbsSyn ) -> HappyWrap24 happyOut24 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut24 #-} newtype HappyWrap25 = HappyWrap25 (Maybe String) happyIn25 :: (Maybe String) -> (HappyAbsSyn ) happyIn25 x = Happy_GHC_Exts.unsafeCoerce# 
(HappyWrap25 x) {-# INLINE happyIn25 #-} happyOut25 :: (HappyAbsSyn ) -> HappyWrap25 happyOut25 x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOut25 #-} happyInTok :: (Token) -> (HappyAbsSyn ) happyInTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyInTok #-} happyOutTok :: (HappyAbsSyn ) -> (Token) happyOutTok x = Happy_GHC_Exts.unsafeCoerce# x {-# INLINE happyOutTok #-} {-# NOINLINE happyTokenStrings #-} happyTokenStrings = ["id","spec_tokentype","spec_token","spec_name","spec_partial","spec_lexer","spec_imported_identity","spec_monad","spec_nonassoc","spec_left","spec_right","spec_prec","spec_shift","spec_expect","spec_error","spec_errorexpected","spec_errorhandlertype","spec_attribute","spec_attributetype","pragma","code","int","\":\"","\";\"","\"::\"","\"%%\"","\"|\"","\"(\"","\")\"","\",\"","%eof"] happyActOffsets :: HappyAddr happyActOffsets = HappyA# "\x03\x00\x00\x00\x03\x00\x00\x00\x30\x00\x00\x00\x00\x00\x00\x00\xff\xff\xff\xff\x1f\x00\x00\x00\x00\x00\x00\x00\x3c\x00\x00\x00\xfe\xff\xff\xff\x00\x00\x00\x00\x3d\x00\x00\x00\x42\x00\x00\x00\x52\x00\x00\x00\x53\x00\x00\x00\x40\x00\x00\x00\x00\x00\x00\x00\x41\x00\x00\x00\x56\x00\x00\x00\x56\x00\x00\x00\x56\x00\x00\x00\x16\x00\x00\x00\x43\x00\x00\x00\x00\x00\x00\x00\x58\x00\x00\x00\x59\x00\x00\x00\x46\x00\x00\x00\x00\x00\x00\x00\x47\x00\x00\x00\x00\x00\x00\x00\x48\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x5d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x4a\x00\x00\x00\x4b\x00\x00\x00\x60\x00\x00\x00\x60\x00\x00\x00\x00\x00\x00\x00\x61\x00\x00\x00\x4f\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x62\x00\x00\x00\x00\x00\x00\x00\x62\x00\x00\x00\x00\x00\x00\x00\x49\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x51\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x54\x00\x00\x00\x2d\x00\x00\x00\x66\x00\x00\x00\x00\x00\x00\x00\x17\x00\x00\x00\x00\x00\x00\x00\x67\x00\x00\x00\x55\x00\x00\x00\x00\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x0
0\x50\x00\x00\x00\x00\x00\x00\x00\x3b\x00\x00\x00\x6b\x00\x00\x00\x57\x00\x00\x00\x00\x00\x00\x00\x6c\x00\x00\x00\x00\x00\x00\x00\x6d\x00\x00\x00\x00\x00\x00\x00\x5a\x00\x00\x00\x6f\x00\x00\x00\x00\x00\x00\x00\x70\x00\x00\x00\x5b\x00\x00\x00\x73\x00\x00\x00\x00\x00\x00\x00\x73\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x5e\x00\x00\x00\x00\x00\x00\x00\x2e\x00\x00\x00\x00\x00\x00\x00\x74\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyGotoOffsets :: HappyAddr happyGotoOffsets = HappyA# "\x0c\x00\x00\x00\x64\x00\x00\x00\x68\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x2b\x00\x00\x00\x00\x00\x00\x00\x69\x00\x00\x00\x71\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x3a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x6e\x00\x00\x00\x72\x00\x00\x00\x75\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x6a\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x76\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x77\x00\x00\x00\x7a\x00\x00\x00\x00\x00\x00\x00\x3e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x4e\x00\x00\x00\x00\x00\x00\x00\x7b\x00\x00\x00\x00\x00\x00\x00\x7d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x7e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x79\x00\x00\x00\x82\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x2c\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00\x34\x00\x00\x00\x00\x00\x00\x00\x39\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x
00\x00\x00\x00\x00\x00\x00\x00\x83\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyDefActions :: HappyAddr happyDefActions = HappyA# "\xc5\xff\xff\xff\x00\x00\x00\x00\xc7\xff\xff\xff\xc6\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xc8\xff\xff\xff\xc7\xff\xff\xff\x00\x00\x00\x00\xe3\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xde\xff\xff\xff\x00\x00\x00\x00\xc9\xff\xff\xff\xc9\xff\xff\xff\xc9\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xd3\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xd1\xff\xff\xff\x00\x00\x00\x00\xd2\xff\xff\xff\xc7\xff\xff\xff\xd5\xff\xff\xff\xd7\xff\xff\xff\xc9\xff\xff\xff\xd6\xff\xff\xff\xd8\xff\xff\xff\xdc\xff\xff\xff\x00\x00\x00\x00\xce\xff\xff\xff\xce\xff\xff\xff\xe1\xff\xff\xff\xcc\xff\xff\xff\x00\x00\x00\x00\xe2\xff\xff\xff\xe4\xff\xff\xff\x00\x00\x00\x00\xfe\xff\xff\xff\xfd\xff\xff\xff\xfb\xff\xff\xff\xf6\xff\xff\xff\xcb\xff\xff\xff\xcd\xff\xff\xff\xe0\xff\xff\xff\xcf\xff\xff\xff\xdf\xff\xff\xff\xdd\xff\xff\xff\xdb\xff\xff\xff\xca\xff\xff\xff\xd4\xff\xff\xff\xd0\xff\xff\xff\xda\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xfc\xff\xff\xff\x00\x00\x00\x00\xf5\xff\xff\xff\xec\xff\xff\xff\x00\x00\x00\x00\xd9\xff\xff\xff\x00\x00\x00\x00\xf8\xff\xff\xff\xf2\xff\xff\xff\xeb\xff\xff\xff\xe5\xff\xff\xff\xed\xff\xff\xff\xef\xff\xff\xff\xf7\xff\xff\xff\x00\x00\x00\x00\xf4\xff\xff\xff\x00\x00\x00\x00\xea\xff\xff\xff\x00\x00\x00\x00\x00\x00\x00\x00\xe6\xff\xff\xff\xec\xff\xff\xff\x00\x00\x00\x00\xec\xff\xff\xff\xfa\xff\xff\xff\xec\xff\xff\xff\xf3\xff\xff\xff\xe7\xff\xff\xff\xf0\xff\xff\xff\xe9\xff\xff\xff\x00\x00\x00\x00\xee\xff\xff\xff\x00\x00\x00\x00\xf1\xff\xff\xff\xf9\xff\xff\xff\xe8\xff\xff\xff"# happyCheck :: HappyAddr happyCheck = HappyA# 
"\xff\xff\xff\xff\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x0b\x00\x00\x00\x0c\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x0f\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x14\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x15\x00\x00\x00\x1b\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x20\x00\x00\x00\x14\x00\x00\x00\x18\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x05\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x0b\x00\x00\x00\x0c\x00\x00\x00\x01\x00\x00\x00\x17\x00\x00\x00\x0f\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x14\x00\x00\x00\x08\x00\x00\x00\x1e\x00\x00\x00\x1f\x00\x00\x00\x0b\x00\x00\x00\x0d\x00\x00\x00\x0e\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x06\x00\x00\x00\x07\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x0a\x00\x00\x00\x02\x00\x00\x00\x18\x00\x00\x00\x16\x00\x00\x00\x1a\x00\x00\x00\x0d\x00\x00\x00\x0e\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x1e\x00\x00\x00\x1f\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x02\x00\x00\x00\x03\x00\x00\x00\x16\x00\x00\x00\x16\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x16\x00\x00\x00\x16\x00\x00\x00\x02\x00\x00\x00\x16\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x16\x00\x00\x00\x16\x00\x00\x00\x16\x00\x00\x00\x02\x00\x00\x00\x16\x00\x00\x00\x16\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x16\x00\x00\x00\x1d\x00\x00\x00\x16\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x16\x00\x00\x00\x16\x00\x00\x00\x1c\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x16\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x18\x00\x00\x00\x1d\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x19\x00\x00\x00\x14\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\x13\x00\x00\x00\x13\x00\x00\x0
0\x13\x00\x00\x00\x03\x00\x00\x00\x0e\x00\x00\x00\x12\x00\x00\x00\x04\x00\x00\x00\xff\xff\xff\xff\x05\x00\x00\x00\x12\x00\x00\x00\x0c\x00\x00\x00\x0f\x00\x00\x00\x12\x00\x00\x00\x12\x00\x00\x00\x0f\x00\x00\x00\x08\x00\x00\x00\x08\x00\x00\x00\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff"# happyTable :: HappyAddr happyTable = HappyA# "\x00\x00\x00\x00\x0b\x00\x00\x00\x0c\x00\x00\x00\x0d\x00\x00\x00\x0e\x00\x00\x00\x0f\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x14\x00\x00\x00\x55\x00\x00\x00\x04\x00\x00\x00\x15\x00\x00\x00\x16\x00\x00\x00\x17\x00\x00\x00\x18\x00\x00\x00\x19\x00\x00\x00\x1a\x00\x00\x00\x45\x00\x00\x00\x46\x00\x00\x00\x47\x00\x00\x00\x48\x00\x00\x00\x49\x00\x00\x00\x04\x00\x00\x00\x2d\x00\x00\x00\x58\x00\x00\x00\x46\x00\x00\x00\x47\x00\x00\x00\x48\x00\x00\x00\x49\x00\x00\x00\xff\xff\xff\xff\x02\x00\x00\x00\x56\x00\x00\x00\x0b\x00\x00\x00\x0c\x00\x00\x00\x0d\x00\x00\x00\x0e\x00\x00\x00\x0f\x00\x00\x00\x10\x00\x00\x00\x11\x00\x00\x00\x12\x00\x00\x00\x13\x00\x00\x00\x14\x00\x00\x00\x07\x00\x00\x00\x1f\x00\x00\x00\x15\x00\x00\x00\x16\x00\x00\x00\x17\x00\x00\x00\x18\x00\x00\x00\x19\x00\x00\x00\x1a\x00\x00\x00\x5b\x00\x00\x00\x4c\x00\x00\x00\x4d\x00\x00\x00\x5c\x00\x00\x00\x08\x00\x00\x00\x09\x00\x00\x00\x56\x00\x00\x00\x46\x00\x00\x00\x47\x00\x00\x00\x48\x00\x00\x00\x49\x00\x00\x00\x60\x00\x00\x00\x46\x00\x00\x00\x47\x00\x00\x00\x48\x00\x00\x00\x49\x00\x00\x00\x2a\x00\x00\x00\x42\x00\x00\x00\x07\x00\x00\x00\x43\x00\x00\x00\x52\x00\x00\x00\x53\x00\x00\x00\x27\x00\x00\x00\x28\x00\x00\x00\x5e\x00\x00\x00\x5f\x00\x00\x00\x32\x00\x00\x00\x28\x00\x00\x00\x2e\x00\x00\x00\x2f\x00\x00\x00\x07\x00\x00\x
00\x2b\x00\x00\x00\x27\x00\x00\x00\x26\x00\x00\x00\x25\x00\x00\x00\x24\x00\x00\x00\x21\x00\x00\x00\x1e\x00\x00\x00\x1d\x00\x00\x00\x1c\x00\x00\x00\x1b\x00\x00\x00\x3b\x00\x00\x00\x07\x00\x00\x00\x21\x00\x00\x00\x38\x00\x00\x00\x37\x00\x00\x00\x35\x00\x00\x00\x2a\x00\x00\x00\x31\x00\x00\x00\x32\x00\x00\x00\x3e\x00\x00\x00\x3c\x00\x00\x00\x41\x00\x00\x00\x4b\x00\x00\x00\x44\x00\x00\x00\x45\x00\x00\x00\x54\x00\x00\x00\x4b\x00\x00\x00\x4e\x00\x00\x00\x4b\x00\x00\x00\x5b\x00\x00\x00\x5a\x00\x00\x00\x4b\x00\x00\x00\x58\x00\x00\x00\x4f\x00\x00\x00\x4b\x00\x00\x00\x4b\x00\x00\x00\x60\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x2d\x00\x00\x00\x39\x00\x00\x00\x3e\x00\x00\x00\x2b\x00\x00\x00\x22\x00\x00\x00\x3c\x00\x00\x00\x00\x00\x00\x00\x3f\x00\x00\x00\x21\x00\x00\x00\x50\x00\x00\x00\x35\x00\x00\x00\x1f\x00\x00\x00\x38\x00\x00\x00\x33\x00\x00\x00\x4f\x00\x00\x00\x61\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"# happyReduceArr = Happy_Data_Array.array (1, 58) [ (1 , happyReduce_1), (2 , happyReduce_2), (3 , happyReduce_3), (4 , happyReduce_4), (5 , happyReduce_5), (6 , happyReduce_6), (7 , happyReduce_7), (8 , happyReduce_8), (9 , happyReduce_9), (10 , happyReduce_10), (11 , happyReduce_11), (12 , happyReduce_12), (13 , happyReduce_13), (14 , happyReduce_14), (15 , happyReduce_15), (16 , happyReduce_16), (17 , happyReduce_17), (18 , happyReduce_18), (19 , happyReduce_19), (20 , happyReduce_20), (21 , happyReduce_21), (22 , happyReduce_22), (23 , happyReduce_23), (24 , happyReduce_24), (25 , happyReduce_25), (26 , happyReduce_26), (27 , happyReduce_27), (28 , happyReduce_28), (29 , happyReduce_29), 
(30 , happyReduce_30), (31 , happyReduce_31), (32 , happyReduce_32), (33 , happyReduce_33), (34 , happyReduce_34), (35 , happyReduce_35), (36 , happyReduce_36), (37 , happyReduce_37), (38 , happyReduce_38), (39 , happyReduce_39), (40 , happyReduce_40), (41 , happyReduce_41), (42 , happyReduce_42), (43 , happyReduce_43), (44 , happyReduce_44), (45 , happyReduce_45), (46 , happyReduce_46), (47 , happyReduce_47), (48 , happyReduce_48), (49 , happyReduce_49), (50 , happyReduce_50), (51 , happyReduce_51), (52 , happyReduce_52), (53 , happyReduce_53), (54 , happyReduce_54), (55 , happyReduce_55), (56 , happyReduce_56), (57 , happyReduce_57), (58 , happyReduce_58) ] happyRuleArr :: HappyAddr happyRuleArr = HappyA# "\x00\x00\x00\x00\x04\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x02\x00\x00\x00\x01\x00\x00\x00\x03\x00\x00\x00\x06\x00\x00\x00\x03\x00\x00\x00\x07\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x04\x00\x00\x00\x03\x00\x00\x00\x04\x00\x00\x00\x00\x00\x00\x00\x05\x00\x00\x00\x01\x00\x00\x00\x05\x00\x00\x00\x03\x00\x00\x00\x06\x00\x00\x00\x03\x00\x00\x00\x06\x00\x00\x00\x01\x00\x00\x00\x07\x00\x00\x00\x04\x00\x00\x00\x07\x00\x00\x00\x03\x00\x00\x00\x08\x00\x00\x00\x01\x00\x00\x00\x08\x00\x00\x00\x04\x00\x00\x00\x09\x00\x00\x00\x01\x00\x00\x00\x09\x00\x00\x00\x00\x00\x00\x00\x0a\x00\x00\x00\x01\x00\x00\x00\x0a\x00\x00\x00\x02\x00\x00\x00\x0b\x00\x00\x00\x01\x00\x00\x00\x0b\x00\x00\x00\x03\x00\x00\x00\x0c\x00\x00\x00\x02\x00\x00\x00\x0c\x00\x00\x00\x01\x00\x00\x00\x0c\x00\x00\x00\x00\x00\x00\x00\x0d\x00\x00\x00\x02\x00\x00\x00\x0d\x00\x00\x00\x01\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x03\x00\x00\x00\x0e\x00\x00\x00\x03\x00\x00\x00\x0e\x00\x00\x00\x01\x00\x00\x00\x0e\x00\x00\x00\x03\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x03\x00\x00\x00\x0e\x00\x00\x00\x04\x00\x00\x00\x0e\x00\x00\x00\x05\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x
0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x03\x00\x00\x00\x0e\x00\x00\x00\x01\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x02\x00\x00\x00\x0e\x00\x00\x00\x03\x00\x00\x00\x0f\x00\x00\x00\x01\x00\x00\x00\x0f\x00\x00\x00\x00\x00\x00\x00\x10\x00\x00\x00\x02\x00\x00\x00\x10\x00\x00\x00\x01\x00\x00\x00\x11\x00\x00\x00\x02\x00\x00\x00\x12\x00\x00\x00\x02\x00\x00\x00\x12\x00\x00\x00\x00\x00\x00\x00\x13\x00\x00\x00\x01\x00\x00\x00\x13\x00\x00\x00\x00\x00\x00\x00\x14\x00\x00\x00\x01\x00\x00\x00\x14\x00\x00\x00\x00\x00\x00\x00"# happyCatchStates :: [Happy_Prelude.Int] happyCatchStates = [] happy_n_terms = 33 :: Happy_Prelude.Int happy_n_nonterms = 21 :: Happy_Prelude.Int happy_n_starts = 1 :: Happy_Prelude.Int happyReduce_1 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_1 = happyReduce 4# 0# happyReduction_1 happyReduction_1 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOut25 happy_x_1 of { (HappyWrap25 happy_var_1) -> case happyOut24 happy_x_2 of { (HappyWrap24 happy_var_2) -> case happyOut6 happy_x_3 of { (HappyWrap6 happy_var_3) -> case happyOut24 happy_x_4 of { (HappyWrap24 happy_var_4) -> happyIn5 (BookendedAbsSyn happy_var_1 happy_var_2 happy_var_3 happy_var_4 ) `HappyStk` happyRest}}}} happyReduce_2 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_2 = happySpecReduce_3 1# happyReduction_2 happyReduction_2 happy_x_3 happy_x_2 happy_x_1 = case happyOut18 happy_x_1 of { (HappyWrap18 happy_var_1) -> case happyOut7 happy_x_3 of { (HappyWrap7 happy_var_3) -> happyIn6 (AbsSyn (reverse happy_var_1) (reverse happy_var_3) )}} happyReduce_3 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_3 = 
happySpecReduce_2 2# happyReduction_3 happyReduction_3 happy_x_2 happy_x_1 = case happyOut7 happy_x_1 of { (HappyWrap7 happy_var_1) -> case happyOut8 happy_x_2 of { (HappyWrap8 happy_var_2) -> happyIn7 (happy_var_2 : happy_var_1 )}} happyReduce_4 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_4 = happySpecReduce_1 2# happyReduction_4 happyReduction_4 happy_x_1 = case happyOut8 happy_x_1 of { (HappyWrap8 happy_var_1) -> happyIn7 ([happy_var_1] )} happyReduce_5 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_5 = happyReduce 6# 3# happyReduction_5 happyReduction_5 (happy_x_6 `HappyStk` happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> case happyOut11 happy_x_6 of { (HappyWrap11 happy_var_6) -> happyIn8 (Rule happy_var_1 happy_var_2 happy_var_6 (Just happy_var_4) ) `HappyStk` happyRest}}}} happyReduce_6 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_6 = happyReduce 7# 3# happyReduction_6 happyReduction_6 (happy_x_7 `HappyStk` happy_x_6 `HappyStk` happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> case happyOut11 happy_x_7 of { (HappyWrap11 happy_var_7) -> happyIn8 (Rule happy_var_1 happy_var_2 happy_var_7 (Just happy_var_4) ) `HappyStk` happyRest}}}} happyReduce_7 :: () => 
Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_7 = happyReduce 4# 3# happyReduction_7 happyReduction_7 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut9 happy_x_2 of { (HappyWrap9 happy_var_2) -> case happyOut11 happy_x_4 of { (HappyWrap11 happy_var_4) -> happyIn8 (Rule happy_var_1 happy_var_2 happy_var_4 Nothing ) `HappyStk` happyRest}}} happyReduce_8 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_8 = happySpecReduce_3 4# happyReduction_8 happyReduction_8 happy_x_3 happy_x_2 happy_x_1 = case happyOut10 happy_x_2 of { (HappyWrap10 happy_var_2) -> happyIn9 (reverse happy_var_2 )} happyReduce_9 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_9 = happySpecReduce_0 4# happyReduction_9 happyReduction_9 = happyIn9 ([] ) happyReduce_10 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_10 = happySpecReduce_1 5# happyReduction_10 happyReduction_10 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> happyIn10 ([happy_var_1] )} happyReduce_11 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_11 = happySpecReduce_3 5# happyReduction_11 happyReduction_11 happy_x_3 happy_x_2 happy_x_1 = case happyOut10 happy_x_1 of { (HappyWrap10 happy_var_1) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokId) -> happyIn10 (happy_var_3 : happy_var_1 )}} happyReduce_12 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_12 = 
happySpecReduce_3 6# happyReduction_12 happyReduction_12 happy_x_3 happy_x_2 happy_x_1 = case happyOut12 happy_x_1 of { (HappyWrap12 happy_var_1) -> case happyOut11 happy_x_3 of { (HappyWrap11 happy_var_3) -> happyIn11 (happy_var_1 : happy_var_3 )}} happyReduce_13 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_13 = happySpecReduce_1 6# happyReduction_13 happyReduction_13 happy_x_1 = case happyOut12 happy_x_1 of { (HappyWrap12 happy_var_1) -> happyIn11 ([happy_var_1] )} happyReduce_14 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_14 = happyMonadReduce 4# 7# happyReduction_14 happyReduction_14 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) tk = happyThen ((case happyOut14 happy_x_1 of { (HappyWrap14 happy_var_1) -> case happyOut17 happy_x_2 of { (HappyWrap17 happy_var_2) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> ( lineP >>= \l -> return (Prod happy_var_1 happy_var_3 l happy_var_2))}}}) ) (\r -> happyReturn (happyIn12 r)) happyReduce_15 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_15 = happyMonadReduce 3# 7# happyReduction_15 happyReduction_15 (happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) tk = happyThen ((case happyOut14 happy_x_1 of { (HappyWrap14 happy_var_1) -> case happyOut17 happy_x_2 of { (HappyWrap17 happy_var_2) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> ( lineP >>= \l -> return (Prod happy_var_1 happy_var_3 l happy_var_2))}}}) ) (\r -> happyReturn (happyIn12 r)) happyReduce_16 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_16 = happySpecReduce_1 8# 
happyReduction_16 happyReduction_16 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> happyIn13 (App happy_var_1 [] )} happyReduce_17 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_17 = happyReduce 4# 8# happyReduction_17 happyReduction_17 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut16 happy_x_3 of { (HappyWrap16 happy_var_3) -> happyIn13 (App happy_var_1 (reverse happy_var_3) ) `HappyStk` happyRest}} happyReduce_18 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_18 = happySpecReduce_1 9# happyReduction_18 happyReduction_18 happy_x_1 = case happyOut15 happy_x_1 of { (HappyWrap15 happy_var_1) -> happyIn14 (reverse happy_var_1 )} happyReduce_19 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_19 = happySpecReduce_0 9# happyReduction_19 happyReduction_19 = happyIn14 ([] ) happyReduce_20 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_20 = happySpecReduce_1 10# happyReduction_20 happyReduction_20 happy_x_1 = case happyOut13 happy_x_1 of { (HappyWrap13 happy_var_1) -> happyIn15 ([happy_var_1] )} happyReduce_21 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_21 = happySpecReduce_2 10# happyReduction_21 happyReduction_21 happy_x_2 happy_x_1 = case happyOut15 happy_x_1 of { (HappyWrap15 happy_var_1) -> case happyOut13 happy_x_2 of { (HappyWrap13 happy_var_2) -> happyIn15 (happy_var_2 : happy_var_1 )}} happyReduce_22 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> 
Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_22 = happySpecReduce_1 11# happyReduction_22 happyReduction_22 happy_x_1 = case happyOut13 happy_x_1 of { (HappyWrap13 happy_var_1) -> happyIn16 ([happy_var_1] )} happyReduce_23 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_23 = happySpecReduce_3 11# happyReduction_23 happyReduction_23 happy_x_3 happy_x_2 happy_x_1 = case happyOut16 happy_x_1 of { (HappyWrap16 happy_var_1) -> case happyOut13 happy_x_3 of { (HappyWrap13 happy_var_3) -> happyIn16 (happy_var_3 : happy_var_1 )}} happyReduce_24 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_24 = happySpecReduce_2 12# happyReduction_24 happyReduction_24 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> happyIn17 (PrecId happy_var_2 )} happyReduce_25 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_25 = happySpecReduce_1 12# happyReduction_25 happyReduction_25 happy_x_1 = happyIn17 (PrecShift ) happyReduce_26 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_26 = happySpecReduce_0 12# happyReduction_26 happyReduction_26 = happyIn17 (PrecNone ) happyReduce_27 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_27 = happySpecReduce_2 13# happyReduction_27 happyReduction_27 happy_x_2 happy_x_1 = case happyOut18 happy_x_1 of { (HappyWrap18 happy_var_1) -> case happyOut19 happy_x_2 of { (HappyWrap19 happy_var_2) -> happyIn18 (happy_var_2 : happy_var_1 )}} happyReduce_28 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P 
(HappyAbsSyn ) happyReduce_28 = happySpecReduce_1 13# happyReduction_28 happyReduction_28 happy_x_1 = case happyOut19 happy_x_1 of { (HappyWrap19 happy_var_1) -> happyIn18 ([happy_var_1] )} happyReduce_29 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_29 = happySpecReduce_2 14# happyReduction_29 happyReduction_29 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn19 (TokenType happy_var_2 )} happyReduce_30 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_30 = happySpecReduce_2 14# happyReduction_30 happyReduction_30 happy_x_2 happy_x_1 = case happyOut21 happy_x_2 of { (HappyWrap21 happy_var_2) -> happyIn19 (TokenSpec happy_var_2 )} happyReduce_31 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_31 = happySpecReduce_3 14# happyReduction_31 happyReduction_31 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> case happyOut20 happy_x_3 of { (HappyWrap20 happy_var_3) -> happyIn19 (TokenName happy_var_2 happy_var_3 False )}} happyReduce_32 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_32 = happySpecReduce_3 14# happyReduction_32 happyReduction_32 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> case happyOut20 happy_x_3 of { (HappyWrap20 happy_var_3) -> happyIn19 (TokenName happy_var_2 happy_var_3 True )}} happyReduce_33 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_33 = happySpecReduce_1 14# happyReduction_33 happyReduction_33 happy_x_1 = happyIn19 (TokenImportedIdentity ) happyReduce_34 
:: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_34 = happySpecReduce_3 14# happyReduction_34 happyReduction_34 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> happyIn19 (TokenLexer happy_var_2 happy_var_3 )}} happyReduce_35 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_35 = happySpecReduce_2 14# happyReduction_35 happyReduction_35 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn19 (TokenMonad "()" happy_var_2 "Happy_Prelude.>>=" "Happy_Prelude.return" )} happyReduce_36 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_36 = happySpecReduce_3 14# happyReduction_36 happyReduction_36 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> happyIn19 (TokenMonad happy_var_2 happy_var_3 "Happy_Prelude.>>=" "Happy_Prelude.return" )}} happyReduce_37 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_37 = happyReduce 4# 14# happyReduction_37 happyReduction_37 (happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> happyIn19 (TokenMonad "()" happy_var_2 happy_var_3 happy_var_4 ) `HappyStk` happyRest}}} happyReduce_38 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> 
Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_38 = happyReduce 5# 14# happyReduction_38 happyReduction_38 (happy_x_5 `HappyStk` happy_x_4 `HappyStk` happy_x_3 `HappyStk` happy_x_2 `HappyStk` happy_x_1 `HappyStk` happyRest) = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> case happyOutTok happy_x_4 of { (TokenInfo happy_var_4 TokCodeQuote) -> case happyOutTok happy_x_5 of { (TokenInfo happy_var_5 TokCodeQuote) -> happyIn19 (TokenMonad happy_var_2 happy_var_3 happy_var_4 happy_var_5 ) `HappyStk` happyRest}}}} happyReduce_39 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_39 = happySpecReduce_2 14# happyReduction_39 happyReduction_39 happy_x_2 happy_x_1 = case happyOut23 happy_x_2 of { (HappyWrap23 happy_var_2) -> happyIn19 (TokenNonassoc happy_var_2 )} happyReduce_40 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_40 = happySpecReduce_2 14# happyReduction_40 happyReduction_40 happy_x_2 happy_x_1 = case happyOut23 happy_x_2 of { (HappyWrap23 happy_var_2) -> happyIn19 (TokenRight happy_var_2 )} happyReduce_41 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_41 = happySpecReduce_2 14# happyReduction_41 happyReduction_41 happy_x_2 happy_x_1 = case happyOut23 happy_x_2 of { (HappyWrap23 happy_var_2) -> happyIn19 (TokenLeft happy_var_2 )} happyReduce_42 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_42 = happySpecReduce_2 14# happyReduction_42 happyReduction_42 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenNum happy_var_2 TokNum) -> happyIn19 (TokenExpect happy_var_2 )} happyReduce_43 :: 
() => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_43 = happySpecReduce_3 14# happyReduction_43 happyReduction_43 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> case happyOut24 happy_x_3 of { (HappyWrap24 happy_var_3) -> happyIn19 (TokenError happy_var_2 happy_var_3 )}} happyReduce_44 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_44 = happySpecReduce_1 14# happyReduction_44 happyReduction_44 happy_x_1 = happyIn19 (TokenErrorExpected ) happyReduce_45 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_45 = happySpecReduce_2 14# happyReduction_45 happyReduction_45 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> happyIn19 (TokenErrorHandlerType happy_var_2 )} happyReduce_46 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_46 = happySpecReduce_2 14# happyReduction_46 happyReduction_46 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn19 (TokenAttributetype happy_var_2 )} happyReduce_47 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_47 = happySpecReduce_3 14# happyReduction_47 happyReduction_47 happy_x_3 happy_x_2 happy_x_1 = case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokId) -> case happyOutTok happy_x_3 of { (TokenInfo happy_var_3 TokCodeQuote) -> happyIn19 (TokenAttribute happy_var_2 happy_var_3 )}} happyReduce_48 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_48 = happySpecReduce_1 15# 
happyReduction_48 happyReduction_48 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> happyIn20 (Just happy_var_1 )} happyReduce_49 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_49 = happySpecReduce_0 15# happyReduction_49 happyReduction_49 = happyIn20 (Nothing ) happyReduce_50 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_50 = happySpecReduce_2 16# happyReduction_50 happyReduction_50 happy_x_2 happy_x_1 = case happyOut22 happy_x_1 of { (HappyWrap22 happy_var_1) -> case happyOut21 happy_x_2 of { (HappyWrap21 happy_var_2) -> happyIn21 (happy_var_1:happy_var_2 )}} happyReduce_51 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_51 = happySpecReduce_1 16# happyReduction_51 happyReduction_51 happy_x_1 = case happyOut22 happy_x_1 of { (HappyWrap22 happy_var_1) -> happyIn21 ([happy_var_1] )} happyReduce_52 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_52 = happySpecReduce_2 17# happyReduction_52 happyReduction_52 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOutTok happy_x_2 of { (TokenInfo happy_var_2 TokCodeQuote) -> happyIn22 ((happy_var_1, parseTokenSpec happy_var_2) )}} happyReduce_53 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_53 = happySpecReduce_2 18# happyReduction_53 happyReduction_53 happy_x_2 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokId) -> case happyOut23 happy_x_2 of { (HappyWrap23 happy_var_2) -> happyIn23 (happy_var_1 : happy_var_2 )}} happyReduce_54 :: () => Happy_GHC_Exts.Int# -> Token -> 
Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_54 = happySpecReduce_0 18# happyReduction_54 happyReduction_54 = happyIn23 ([] ) happyReduce_55 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_55 = happySpecReduce_1 19# happyReduction_55 happyReduction_55 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokCodeQuote) -> happyIn24 (Just happy_var_1 )} happyReduce_56 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_56 = happySpecReduce_0 19# happyReduction_56 happyReduction_56 = happyIn24 (Nothing ) happyReduce_57 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_57 = happySpecReduce_1 20# happyReduction_57 happyReduction_57 happy_x_1 = case happyOutTok happy_x_1 of { (TokenInfo happy_var_1 TokPragmaQuote) -> happyIn25 (Just happy_var_1 )} happyReduce_58 :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> P (HappyAbsSyn ) happyReduce_58 = happySpecReduce_0 20# happyReduction_58 happyReduction_58 = happyIn25 (Nothing ) happyTerminalToTok term = case term of { TokenEOF -> 32#; TokenInfo happy_dollar_dollar TokId -> 2#; TokenKW TokSpecId_TokenType -> 3#; TokenKW TokSpecId_Token -> 4#; TokenKW TokSpecId_Name -> 5#; TokenKW TokSpecId_Partial -> 6#; TokenKW TokSpecId_Lexer -> 7#; TokenKW TokSpecId_ImportedIdentity -> 8#; TokenKW TokSpecId_Monad -> 9#; TokenKW TokSpecId_Nonassoc -> 10#; TokenKW TokSpecId_Left -> 11#; TokenKW TokSpecId_Right -> 12#; TokenKW TokSpecId_Prec -> 13#; TokenKW TokSpecId_Shift -> 14#; TokenKW TokSpecId_Expect -> 15#; TokenKW TokSpecId_Error -> 16#; TokenKW TokSpecId_ErrorExpected -> 17#; TokenKW TokSpecId_ErrorHandlerType -> 18#; TokenKW TokSpecId_Attribute -> 19#; 
TokenKW TokSpecId_Attributetype -> 20#; TokenInfo happy_dollar_dollar TokPragmaQuote -> 21#; TokenInfo happy_dollar_dollar TokCodeQuote -> 22#; TokenNum happy_dollar_dollar TokNum -> 23#; TokenKW TokColon -> 24#; TokenKW TokSemiColon -> 25#; TokenKW TokDoubleColon -> 26#; TokenKW TokDoublePercent -> 27#; TokenKW TokBar -> 28#; TokenKW TokParenL -> 29#; TokenKW TokParenR -> 30#; TokenKW TokComma -> 31#; _ -> -1#; } {-# NOINLINE happyTerminalToTok #-} happyLex kend kmore = lexTokenP (\tk -> case tk of { TokenEOF -> kend tk; _ -> kmore (happyTerminalToTok tk) tk }) {-# INLINE happyLex #-} happyNewToken action sts stk = happyLex (\tk -> happyDoAction 32# tk action sts stk) (\i tk -> happyDoAction i tk action sts stk) happyReport 32# = happyReport' happyReport _ = happyReport' happyThen :: () => (P a) -> (a -> (P b)) -> (P b) happyThen = (Happy_Prelude.>>=) happyReturn :: () => a -> (P a) happyReturn = (Happy_Prelude.return) happyParse :: () => Happy_GHC_Exts.Int# -> P (HappyAbsSyn ) happyNewToken :: () => Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> (P (HappyAbsSyn )) happyDoAction :: () => Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> (P (HappyAbsSyn )) happyReduceArr :: () => Happy_Data_Array.Array Happy_Prelude.Int (Happy_GHC_Exts.Int# -> Token -> Happy_GHC_Exts.Int# -> Happy_IntList -> HappyStk (HappyAbsSyn ) -> (P (HappyAbsSyn ))) happyThen1 :: () => P a -> (a -> P b) -> P b happyThen1 = happyThen happyFmap1 f m = happyThen m (\a -> happyReturn (f a)) happyReturn1 :: () => a -> (P a) happyReturn1 = happyReturn happyReport' :: () => (Token) -> [Happy_Prelude.String] -> (P a) -> (P a) happyReport' = (\tokens expected resume -> happyError) happyAbort :: () => (P a) happyAbort = Happy_Prelude.error "Called abort handler in non-resumptive parser" ourParser = happySomeParser where happySomeParser = happyThen (happyParse 0#) (\x -> happyReturn (let {(HappyWrap5 x') = happyOut5 x} in x')) happySeq 
= happyDontSeq happyError :: P a happyError = failP (\l -> show l ++ ": Parse error\n") #define HAPPY_COERCE 1 -- $Id: GenericTemplate.hs,v 1.26 2005/01/14 14:47:22 simonmar Exp $ #if !defined(__GLASGOW_HASKELL__) # error This code isn't being built with GHC. #endif -- Get WORDS_BIGENDIAN (if defined) #include "MachDeps.h" -- Do not remove this comment. Required to fix CPP parsing when using GCC and a clang-compiled alex. #define LT(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.<# m)) :: Happy_Prelude.Bool) #define GTE(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.>=# m)) :: Happy_Prelude.Bool) #define EQ(n,m) ((Happy_GHC_Exts.tagToEnum# (n Happy_GHC_Exts.==# m)) :: Happy_Prelude.Bool) #define PLUS(n,m) (n Happy_GHC_Exts.+# m) #define MINUS(n,m) (n Happy_GHC_Exts.-# m) #define TIMES(n,m) (n Happy_GHC_Exts.*# m) #define NEGATE(n) (Happy_GHC_Exts.negateInt# (n)) type Happy_Int = Happy_GHC_Exts.Int# data Happy_IntList = HappyCons Happy_Int Happy_IntList #define INVALID_TOK -1# #define ERROR_TOK 0# #define CATCH_TOK 1# #if defined(HAPPY_COERCE) # define GET_ERROR_TOKEN(x) (case Happy_GHC_Exts.unsafeCoerce# x of { (Happy_GHC_Exts.I# i) -> i }) # define MK_ERROR_TOKEN(i) (Happy_GHC_Exts.unsafeCoerce# (Happy_GHC_Exts.I# i)) # define MK_TOKEN(x) (happyInTok (x)) #else # define GET_ERROR_TOKEN(x) (case x of { HappyErrorToken (Happy_GHC_Exts.I# i) -> i }) # define MK_ERROR_TOKEN(i) (HappyErrorToken (Happy_GHC_Exts.I# i)) # define MK_TOKEN(x) (HappyTerminal (x)) #endif #if defined(HAPPY_DEBUG) # define DEBUG_TRACE(s) (happyTrace (s)) Happy_Prelude.$ happyTrace string expr = Happy_System_IO_Unsafe.unsafePerformIO Happy_Prelude.$ do Happy_System_IO.hPutStr Happy_System_IO.stderr string Happy_Prelude.return expr #else # define DEBUG_TRACE(s) {- nothing -} #endif infixr 9 `HappyStk` data HappyStk a = HappyStk a (HappyStk a) ----------------------------------------------------------------------------- -- starting the parse happyParse start_state = happyNewToken 
start_state notHappyAtAll notHappyAtAll ----------------------------------------------------------------------------- -- Accepting the parse -- If the current token is ERROR_TOK, it means we've just accepted a partial -- parse (a %partial parser). We must ignore the saved token on the top of -- the stack in this case. happyAccept ERROR_TOK tk st sts (_ `HappyStk` ans `HappyStk` _) = happyReturn1 ans happyAccept j tk st sts (HappyStk ans _) = (happyTcHack j (happyTcHack st)) (happyReturn1 ans) ----------------------------------------------------------------------------- -- Arrays only: do the next action happyDoAction i tk st = DEBUG_TRACE("state: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ",\ttoken: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ ",\taction: ") case happyDecodeAction (happyNextAction i st) of HappyFail -> DEBUG_TRACE("failing.\n") happyFail i tk st HappyAccept -> DEBUG_TRACE("accept.\n") happyAccept i tk st HappyReduce rule -> DEBUG_TRACE("reduce (rule " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# rule) Happy_Prelude.++ ")") (happyReduceArr Happy_Data_Array.! 
(Happy_GHC_Exts.I# rule)) i tk st HappyShift new_state -> DEBUG_TRACE("shift, enter state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) Happy_Prelude.++ "\n") happyShift new_state i tk st {-# INLINE happyNextAction #-} happyNextAction i st = case happyIndexActionTable i st of Happy_Prelude.Just (Happy_GHC_Exts.I# act) -> act Happy_Prelude.Nothing -> happyIndexOffAddr happyDefActions st {-# INLINE happyIndexActionTable #-} happyIndexActionTable i st | GTE(i, 0#), GTE(off, 0#), EQ(happyIndexOffAddr happyCheck off, i) -- i >= 0: Guard against INVALID_TOK (do the default action, which ultimately errors) -- off >= 0: Otherwise it's a default action -- equality check: Ensure that the entry in the compressed array is owned by st = Happy_Prelude.Just (Happy_GHC_Exts.I# (happyIndexOffAddr happyTable off)) | Happy_Prelude.otherwise = Happy_Prelude.Nothing where off = PLUS(happyIndexOffAddr happyActOffsets st, i) data HappyAction = HappyFail | HappyAccept | HappyReduce Happy_Int -- rule number | HappyShift Happy_Int -- new state deriving Happy_Prelude.Show {-# INLINE happyDecodeAction #-} happyDecodeAction :: Happy_Int -> HappyAction happyDecodeAction 0# = HappyFail happyDecodeAction -1# = HappyAccept happyDecodeAction action | LT(action, 0#) = HappyReduce NEGATE(PLUS(action, 1#)) | Happy_Prelude.otherwise = HappyShift MINUS(action, 1#) {-# INLINE happyIndexGotoTable #-} happyIndexGotoTable nt st = happyIndexOffAddr happyTable off where off = PLUS(happyIndexOffAddr happyGotoOffsets st, nt) {-# INLINE happyIndexOffAddr #-} happyIndexOffAddr :: HappyAddr -> Happy_Int -> Happy_Int happyIndexOffAddr (HappyA# arr) off = #if __GLASGOW_HASKELL__ >= 901 Happy_GHC_Exts.int32ToInt# -- qualified import because it doesn't exist on older GHC's #endif #ifdef WORDS_BIGENDIAN -- The CI of `alex` tests this code path (Happy_GHC_Exts.word32ToInt32# (Happy_GHC_Exts.wordToWord32# (Happy_GHC_Exts.byteSwap32# (Happy_GHC_Exts.word32ToWord# (Happy_GHC_Exts.int32ToWord32# #endif 
(Happy_GHC_Exts.indexInt32OffAddr# arr off) #ifdef WORDS_BIGENDIAN ))))) #endif happyIndexRuleArr :: Happy_Int -> (# Happy_Int, Happy_Int #) happyIndexRuleArr r = (# nt, len #) where !(Happy_GHC_Exts.I# n_starts) = happy_n_starts offs = TIMES(MINUS(r,n_starts),2#) nt = happyIndexOffAddr happyRuleArr offs len = happyIndexOffAddr happyRuleArr PLUS(offs,1#) data HappyAddr = HappyA# Happy_GHC_Exts.Addr# ----------------------------------------------------------------------------- -- Shifting a token happyShift new_state ERROR_TOK tk st sts stk@(x `HappyStk` _) = -- See "Error Fixup" below let i = GET_ERROR_TOKEN(x) in DEBUG_TRACE("shifting the error token") happyDoAction i tk new_state (HappyCons st sts) stk happyShift new_state i tk st sts stk = happyNewToken new_state (HappyCons st sts) (MK_TOKEN(tk) `HappyStk` stk) -- happyReduce is specialised for the common cases. happySpecReduce_0 nt fn j tk st sts stk = happySeq fn (happyGoto nt j tk st (HappyCons st sts) (fn `HappyStk` stk)) happySpecReduce_1 nt fn j tk old_st sts@(HappyCons st _) (v1 `HappyStk` stk') = let r = fn v1 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happySpecReduce_2 nt fn j tk old_st (HappyCons _ sts@(HappyCons st _)) (v1 `HappyStk` v2 `HappyStk` stk') = let r = fn v1 v2 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happySpecReduce_3 nt fn j tk old_st (HappyCons _ (HappyCons _ sts@(HappyCons st _))) (v1 `HappyStk` v2 `HappyStk` v3 `HappyStk` stk') = let r = fn v1 v2 v3 in happyTcHack old_st (happySeq r (happyGoto nt j tk st sts (r `HappyStk` stk'))) happyReduce k nt fn j tk st sts stk = case happyDrop MINUS(k,(1# :: Happy_Int)) sts of sts1@(HappyCons st1 _) -> let r = fn stk in -- it doesn't hurt to always seq here... 
st `happyTcHack` happyDoSeq r (happyGoto nt j tk st1 sts1 r) happyMonadReduce k nt fn j tk st sts stk = case happyDrop k (HappyCons st sts) of sts1@(HappyCons st1 _) -> let drop_stk = happyDropStk k stk in j `happyTcHack` happyThen1 (fn stk tk) (\r -> happyGoto nt j tk st1 sts1 (r `HappyStk` drop_stk)) happyMonad2Reduce k nt fn j tk st sts stk = case happyDrop k (HappyCons st sts) of sts1@(HappyCons st1 _) -> let drop_stk = happyDropStk k stk off = happyIndexOffAddr happyGotoOffsets st1 off_i = PLUS(off, nt) new_state = happyIndexOffAddr happyTable off_i in j `happyTcHack` happyThen1 (fn stk tk) (\r -> happyNewToken new_state sts1 (r `HappyStk` drop_stk)) happyDrop 0# l = l happyDrop n (HappyCons _ t) = happyDrop MINUS(n,(1# :: Happy_Int)) t happyDropStk 0# l = l happyDropStk n (x `HappyStk` xs) = happyDropStk MINUS(n,(1#::Happy_Int)) xs ----------------------------------------------------------------------------- -- Moving to a new state after a reduction happyGoto nt j tk st = DEBUG_TRACE(", goto state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) Happy_Prelude.++ "\n") happyDoAction j tk new_state where new_state = happyIndexGotoTable nt st {- Note [Error recovery] ~~~~~~~~~~~~~~~~~~~~~~~~ When there is no applicable action for the current lookahead token `tk`, happy enters error recovery mode. Depending on whether the grammar file declares the two action form `%error { abort } { report }` for Resumptive Error Handling, it works in one (not resumptive) or two phases (resumptive): 1. Fixup mode: Try to see if there is an action for the error token ERROR_TOK. If there is, do *not* emit an error and pretend instead that an `error` token was inserted. When there is no ERROR_TOK action, report an error. In non-resumptive error handling, calling the single error handler (e.g. `happyError`) will throw an exception and abort the parser. However, in resumptive error handling we enter *error resumption mode*. 2. 
Error resumption mode: After reporting the error (with `report`), happy will attempt to find a good state stack to resume parsing in. For each candidate stack, it discards input until one of the candidates resumes (i.e. shifts the current input). If no candidate resumes before the end of input, resumption has failed and we call the `abort` function, to much the same effect as in non-resumptive error handling. Candidate stacks are declared by the grammar author using the special `catch` terminal and called "catch frames". This mechanism is described in detail in Note [happyResume]. The `catch` resumption mechanism (2) is what is usually associated with `error` in `bison` or `menhir`. Since `error` is used for the Fixup mechanism (1) above, we call the corresponding token `catch`. Furthermore, in contrast to `bison`, our implementation of `catch` non-deterministically considers multiple catch frames on the stack for resumption (see Note [Multiple catch frames]). Note [happyResume] ~~~~~~~~~~~~~~~~~~ `happyResume` implements the resumption mechanism from Note [Error recovery]. It is best understood by example. Consider Exp :: { String } Exp : '1' { "1" } | catch { "catch" } | Exp '+' Exp %shift { $1 Happy_Prelude.++ " + " Happy_Prelude.++ $3 } -- %shift: associate 1 + 1 + 1 to the right | '(' Exp ')' { "(" Happy_Prelude.++ $2 Happy_Prelude.++ ")" } The idea of using `catch` here is that upon encountering a parse error during expression parsing, we can gracefully degrade using the `catch` rule, still producing a partial syntax tree, and keep on parsing to find further syntax errors. Let's trace the parser state for input 11+1, which will error out after shifting 1. After shifting, we have the following item stack (growing downwards and omitting transitive closure items): State 0: %start_parseExp -> . Exp State 5: Exp -> '1' . (Stack as a list of state numbers: [5,0].) As Note [Error recovery] describes, we will first try Fixup mode.
That fails because no production can shift the `error` token. Next we try Error resumption mode. This works as follows: 1. Pop off the item stack until we find an item that can shift the `catch` token. (Implemented in `pop_items`.) * State 5 cannot shift catch. Pop. * State 0 can shift catch, which would transition into State 4: Exp -> catch . So record the *stack* `[4,0]` after doing the shift transition. We call this a *catch frame*, where the top is a *catch state*, corresponding to an item in which we just shifted a `catch` token. There can be multiple such catch stacks, see Note [Multiple catch frames]. 2. Discard tokens from the input until the lookahead can be shifted in one of the catch stacks. (Implemented in `discard_input_until_exp` and `some_catch_state_shifts`.) * We cannot shift the current lookahead '1' in state 4, so we discard it. * We *can* shift the next lookahead '+' in state 4, but only after reducing, which pops State 4 and goes to State 3: State 3: %start_parseExp -> Exp . Exp -> Exp . '+' Exp Here we can shift '+'. As you can see, to implement this machinery we need to simulate the operation of the LALR automaton, especially reduction (`happySimulateReduce`). Note [Multiple catch frames] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ For fewer spurious error messages, it can be beneficial to trace multiple catch items. Consider Exp : '1' | catch | Exp '+' Exp %shift | '(' Exp ')' Let's trace the parser state for input (;+1, which will error out after shifting (. After shifting, we have the following item stack (growing downwards): State 0: %start_parseExp -> . Exp State 6: Exp -> '(' . Exp ')' Upon error, we want to find items in the stack which can shift a catch token. Note that both State 0 and State 6 can shift a catch token, transitioning into State 4: Exp -> catch . Hence we record the catch frames `[4,6,0]` and `[4,0]` for possible resumption. Which catch frame do we pick for resumption?
Note that resuming catch frame `[4,0]` will parse as "catch+1", whereas resuming the innermost frame `[4,6,0]` corresponds to parsing "(catch+1". The latter would keep discarding input until the closing ')' is found. So we will discard + and 1, leading to a spurious syntax error at the end of input, aborting the parse and never producing a partial syntax tree. Bad! It is far preferable to resume with catch frame `[4,0]`, where we can resume successfully on input +, so that is what we do. In general, we pick the catch frame for resumption that discards the least amount of input for a successful shift, preferring the topmost such catch frame. -} -- happyFail :: Happy_Int -> Token -> Happy_Int -> _ -- This function triggers Note [Error recovery]. -- If the current token is ERROR_TOK, phase (1) has failed and we might try -- phase (2). happyFail ERROR_TOK = happyFixupFailed happyFail i = happyTryFixup i -- Enter Error Fixup (see Note [Error recovery]): -- generate an error token, save the old token and carry on. -- When a `happyShift` accepts the error token, we will pop off the error token -- to resume parsing with the current lookahead `i`. happyTryFixup i tk action sts stk = DEBUG_TRACE("entering `error` fixup.\n") happyDoAction ERROR_TOK tk action sts (MK_ERROR_TOKEN(i) `HappyStk` stk) -- NB: `happyShift` will simply pop the error token and carry on with -- `tk`. Hence we don't change `tk` in the call here -- See Note [Error recovery], phase (2). -- Enter resumption mode after reporting the error by calling `happyResume`. 
happyFixupFailed tk st sts (x `HappyStk` stk) = let i = GET_ERROR_TOKEN(x) in DEBUG_TRACE("`error` fixup failed.\n") let resume = happyResume i tk st sts stk expected = happyExpectedTokens st sts in happyReport i tk expected resume -- happyResume :: Happy_Int -> Token -> Happy_Int -> _ -- See Note [happyResume] happyResume i tk st sts stk = pop_items [] st sts stk where !(Happy_GHC_Exts.I# n_starts) = happy_n_starts -- this is to test whether we have a start token !(Happy_GHC_Exts.I# eof_i) = happy_n_terms Happy_Prelude.- 1 -- this is the token number of the EOF token happy_list_to_list :: Happy_IntList -> [Happy_Prelude.Int] happy_list_to_list (HappyCons st sts) | LT(st, n_starts) = [(Happy_GHC_Exts.I# st)] | Happy_Prelude.otherwise = (Happy_GHC_Exts.I# st) : happy_list_to_list sts -- See (1) of Note [happyResume] pop_items catch_frames st sts stk | LT(st, n_starts) = DEBUG_TRACE("reached start state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", ") if Happy_Prelude.null catch_frames_new then DEBUG_TRACE("no resumption.\n") happyAbort else DEBUG_TRACE("now discard input, trying to anchor in states " Happy_Prelude.++ Happy_Prelude.show (Happy_Prelude.map (happy_list_to_list . 
Happy_Prelude.fst) (Happy_Prelude.reverse catch_frames_new)) Happy_Prelude.++ ".\n") discard_input_until_exp i tk (Happy_Prelude.reverse catch_frames_new) | (HappyCons st1 sts1) <- sts, _ `HappyStk` stk1 <- stk = pop_items catch_frames_new st1 sts1 stk1 where !catch_frames_new | HappyShift new_state <- happyDecodeAction (happyNextAction CATCH_TOK st) , DEBUG_TRACE("can shift catch token in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", into state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# new_state) Happy_Prelude.++ "\n") Happy_Prelude.null (Happy_Prelude.filter (\(HappyCons _ (HappyCons h _),_) -> EQ(st,h)) catch_frames) = (HappyCons new_state (HappyCons st sts), MK_ERROR_TOKEN(i) `HappyStk` stk):catch_frames -- MK_ERROR_TOKEN(i) is just some dummy that should not be accessed by user code | Happy_Prelude.otherwise = DEBUG_TRACE("already shifted or can't shift catch in " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ "\n") catch_frames -- See (2) of Note [happyResume] discard_input_until_exp i tk catch_frames | Happy_Prelude.Just (HappyCons st (HappyCons catch_st sts), catch_frame) <- some_catch_state_shifts i catch_frames = DEBUG_TRACE("found expected token in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ " after shifting from " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# catch_st) Happy_Prelude.++ ": " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ "\n") happyDoAction i tk st (HappyCons catch_st sts) catch_frame | EQ(i,eof_i) -- is i EOF? = DEBUG_TRACE("reached EOF, cannot resume. 
abort parse :(\n") happyAbort | Happy_Prelude.otherwise = DEBUG_TRACE("discard token " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ "\n") happyLex (\eof_tk -> discard_input_until_exp eof_i eof_tk catch_frames) -- eof (\i tk -> discard_input_until_exp i tk catch_frames) -- not eof some_catch_state_shifts _ [] = DEBUG_TRACE("no catch state could shift.\n") Happy_Prelude.Nothing some_catch_state_shifts i catch_frames@(((HappyCons st sts),_):_) = try_head i st sts catch_frames where try_head i st sts catch_frames = -- PRECONDITION: head catch_frames = (HappyCons st sts) DEBUG_TRACE("trying token " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ " in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ": ") case happyDecodeAction (happyNextAction i st) of HappyFail -> DEBUG_TRACE("fail.\n") some_catch_state_shifts i (Happy_Prelude.tail catch_frames) HappyAccept -> DEBUG_TRACE("accept.\n") Happy_Prelude.Just (Happy_Prelude.head catch_frames) HappyShift _ -> DEBUG_TRACE("shift.\n") Happy_Prelude.Just (Happy_Prelude.head catch_frames) HappyReduce r -> case happySimulateReduce r st sts of (HappyCons st1 sts1) -> try_head i st1 sts1 catch_frames happySimulateReduce r st sts = DEBUG_TRACE("simulate reduction of rule " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# r) Happy_Prelude.++ ", ") let (# nt, len #) = happyIndexRuleArr r in DEBUG_TRACE("nt " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# nt) Happy_Prelude.++ ", len: " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# len) Happy_Prelude.++ ", new_st ") let !(sts1@(HappyCons st1 _)) = happyDrop len (HappyCons st sts) new_st = happyIndexGotoTable nt st1 in DEBUG_TRACE(Happy_Prelude.show (Happy_GHC_Exts.I# new_st) Happy_Prelude.++ ".\n") (HappyCons new_st sts1) happyTokenToString :: Happy_Prelude.Int -> Happy_Prelude.String happyTokenToString i = happyTokenStrings Happy_Prelude.!! 
(i Happy_Prelude.- 2) -- 2: errorTok, catchTok happyExpectedTokens :: Happy_Int -> Happy_IntList -> [Happy_Prelude.String] -- Upon a parse error, we want to suggest tokens that are expected in that -- situation. This function computes such tokens. -- It works by examining the top of the state stack. -- For every token number that does a shift transition, record that token number. -- For every token number that does a reduce transition, simulate that reduction -- on the state stack and repeat. -- The recorded token numbers are then formatted with 'happyTokenToString' and -- returned. happyExpectedTokens st sts = DEBUG_TRACE("constructing expected tokens.\n") Happy_Prelude.map happyTokenToString (search_shifts st sts []) where search_shifts st sts shifts = Happy_Prelude.foldr (add_action st sts) shifts (distinct_actions st) add_action st sts (Happy_GHC_Exts.I# i, Happy_GHC_Exts.I# act) shifts = DEBUG_TRACE("found action in state " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# st) Happy_Prelude.++ ", input " Happy_Prelude.++ Happy_Prelude.show (Happy_GHC_Exts.I# i) Happy_Prelude.++ ", " Happy_Prelude.++ Happy_Prelude.show (happyDecodeAction act) Happy_Prelude.++ "\n") case happyDecodeAction act of HappyFail -> shifts HappyAccept -> shifts -- This would always be %eof or error...
Not helpful HappyShift _ -> Happy_Prelude.insert (Happy_GHC_Exts.I# i) shifts HappyReduce r -> case happySimulateReduce r st sts of (HappyCons st1 sts1) -> search_shifts st1 sts1 shifts distinct_actions st -- The (token number, action) pairs of all actions in the given state = ((-1), (Happy_GHC_Exts.I# (happyIndexOffAddr happyDefActions st))) : [ (i, act) | i <- [begin_i..happy_n_terms], act <- get_act row_off i ] where row_off = happyIndexOffAddr happyActOffsets st begin_i = 2 -- +2: errorTok,catchTok get_act off (Happy_GHC_Exts.I# i) -- happyIndexActionTable with cached row offset | let off_i = PLUS(off,i) , GTE(off_i,0#) , EQ(happyIndexOffAddr happyCheck off_i,i) = [(Happy_GHC_Exts.I# (happyIndexOffAddr happyTable off_i))] | Happy_Prelude.otherwise = [] -- Internal happy errors: notHappyAtAll :: a notHappyAtAll = Happy_Prelude.error "Internal Happy parser panic. This is not supposed to happen! Please open a bug report at https://github.com/haskell/happy/issues.\n" ----------------------------------------------------------------------------- -- Hack to get the typechecker to accept our action functions happyTcHack :: Happy_Int -> a -> a happyTcHack x y = y {-# INLINE happyTcHack #-} ----------------------------------------------------------------------------- -- Seq-ing. If the --strict flag is given, then Happy emits -- happySeq = happyDoSeq -- otherwise it emits -- happySeq = happyDontSeq happyDoSeq, happyDontSeq :: a -> b -> b happyDoSeq a b = a `Happy_GHC_Exts.seq` b happyDontSeq a b = b ----------------------------------------------------------------------------- -- Don't inline any functions from the template. GHC has a nasty habit -- of deciding to inline happyGoto everywhere, which increases the size of -- the generated parser quite a bit. 
{-# NOINLINE happyDoAction #-} {-# NOINLINE happyTable #-} {-# NOINLINE happyCheck #-} {-# NOINLINE happyActOffsets #-} {-# NOINLINE happyGotoOffsets #-} {-# NOINLINE happyDefActions #-} {-# NOINLINE happyShift #-} {-# NOINLINE happySpecReduce_0 #-} {-# NOINLINE happySpecReduce_1 #-} {-# NOINLINE happySpecReduce_2 #-} {-# NOINLINE happySpecReduce_3 #-} {-# NOINLINE happyReduce #-} {-# NOINLINE happyMonadReduce #-} {-# NOINLINE happyGoto #-} {-# NOINLINE happyFail #-} -- end of Happy Template. happy-lib-2.1.7/frontend/src/Happy/Frontend/PrettyGrammar.hs0000644000000000000000000000433707346545000022206 0ustar0000000000000000{-# LANGUAGE CPP #-} module Happy.Frontend.PrettyGrammar where #if MIN_VERSION_base(4,11,0) import Prelude hiding ((<>)) #endif import Happy.Frontend.AbsSyn render :: Doc -> String render = maybe "" ($ "") ppAbsSyn :: AbsSyn String -> Doc ppAbsSyn (AbsSyn ds rs) = vsep (vcat (map ppDirective ds) : map ppRule rs) ppDirective :: Directive a -> Doc ppDirective dir = case dir of TokenNonassoc xs -> prec "%nonassoc" xs TokenRight xs -> prec "%right" xs TokenLeft xs -> prec "%left" xs _ -> empty where prec x xs = text x <+> hsep (map text xs) ppRule :: Rule String -> Doc ppRule (Rule name _ prods _) = text name $$ vcat (zipWith (<+>) starts (map ppProd prods)) where starts = text " :" : repeat (text " |") ppProd :: Prod String -> Doc ppProd (Prod ts _ _ p) = psDoc <+> ppPrec p where psDoc = if null ts then text "{- empty -}" else hsep (map ppTerm ts) ppPrec :: Prec -> Doc ppPrec PrecNone = empty ppPrec PrecShift = text "%shift" ppPrec (PrecId x) = text "%prec" <+> text x ppTerm :: Term -> Doc ppTerm (App x ts) = text x <> ppTuple (map ppTerm ts) ppTuple :: [Doc] -> Doc ppTuple [] = empty ppTuple xs = parens (hsep (punctuate comma xs)) -------------------------------------------------------------------------------- -- Pretty printing combinator type Doc = Maybe ShowS empty :: Doc empty = Nothing punctuate :: Doc -> [Doc] -> [Doc] punctuate _ [] = [] 
punctuate _ [x] = [x] punctuate sep (x : xs) = (x <> sep) : punctuate sep xs comma :: Doc comma = char ',' char :: Char -> Doc char x = Just (showChar x) text :: String -> Doc text x = if null x then Nothing else Just (showString x) (<+>) :: Doc -> Doc -> Doc Nothing <+> y = y x <+> Nothing = x x <+> y = x <> char ' ' <> y (<>) :: Doc -> Doc -> Doc Nothing <> y = y x <> Nothing = x Just x <> Just y = Just (x . y) ($$) :: Doc -> Doc -> Doc Nothing $$ y = y x $$ Nothing = x x $$ y = x <> char '\n' <> y hsep :: [Doc] -> Doc hsep = hcat . punctuate (char ' ') vcat :: [Doc] -> Doc vcat = foldr ($$) empty vsep :: [Doc] -> Doc vsep = vcat . punctuate (char '\n') parens :: Doc -> Doc parens x = char '(' <> x <> char ')' hcat :: [Doc] -> Doc hcat = foldr (<>) empty happy-lib-2.1.7/grammar/src/Happy/0000755000000000000000000000000007346545000015075 5ustar0000000000000000happy-lib-2.1.7/grammar/src/Happy/Grammar.lhs0000644000000000000000000001515407346545000017201 0ustar0000000000000000----------------------------------------------------------------------------- The Grammar data type. (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > {-# LANGUAGE GeneralizedNewtypeDeriving #-} > -- | This module exports the 'Grammar' data type, which captures the information in .y-files. > module Happy.Grammar ( > Name (..), > > Production(..), > TokenSpec(..), > Grammar(..), > AttributeGrammarExtras(..), > Priority(..), > Assoc(..), > ErrorHandlerInfo(..), > ErrorExpectedMode(..), > Directives(..), > > errorName, errorTok, catchName, catchTok, > startName, dummyName, firstStartTok, dummyTok, > eofName, epsilonTok, > ) where > import Data.Array > import Happy.Grammar.ExpressionWithHole (ExpressionWithHole) > data Production eliminator > = Production Name [Name] (eliminator,[Int]) Priority > deriving Show > data TokenSpec > > -- | The token is just a fixed value > = TokenFixed String > > -- | The token is an expression involving the value of the lexed token.
> | TokenWithValue ExpressionWithHole > > deriving (Eq, Show) > data Grammar eliminator > = Grammar { > productions :: [Production eliminator], > lookupProdNo :: Int -> Production eliminator, > lookupProdsOfName :: Name -> [Int], > token_specs :: [(Name, TokenSpec)], > terminals :: [Name], > non_terminals :: [Name], > starts :: [(String,Name,Name,Bool)], > types :: Array Name (Maybe String), > token_names :: Array Name String, > first_nonterm :: Name, > first_term :: Name, > eof_term :: Name, > priorities :: [(Name,Priority)] > } > data AttributeGrammarExtras > = AttributeGrammarExtras { > attributes :: [(String,String)], > attributetype :: String > } > instance Show eliminator => Show (Grammar eliminator) where > showsPrec _ (Grammar > { productions = p > , token_specs = t > , terminals = ts > , non_terminals = nts > , starts = sts > , types = tys > , token_names = e > , first_nonterm = fnt > , first_term = ft > , eof_term = eof > }) > = showString "productions = " . shows p > . showString "\ntoken_specs = " . shows t > . showString "\nterminals = " . shows ts > . showString "\nnonterminals = " . shows nts > . showString "\nstarts = " . shows sts > . showString "\ntypes = " . shows tys > . showString "\ntoken_names = " . shows e > . showString "\nfirst_nonterm = " . shows fnt > . showString "\nfirst_term = " . shows ft > . showString "\neof = " . shows eof > . showString "\n" > data Assoc = LeftAssoc | RightAssoc | None > deriving Show > data Priority = No | Prio Assoc Int | PrioLowest > deriving Show > instance Eq Priority where > No == No = True > Prio _ i == Prio _ j = i == j > _ == _ = False > data ErrorHandlerInfo > = DefaultErrorHandler > -- ^ Default handler `happyError` > | CustomErrorHandler String > -- ^ Call this handler on error. > | ResumptiveErrorHandler String String > -- ^ `ResumptiveErrorHandler abort report`: > -- Upon encountering a parse error, call non-fatal function `report`. > -- Then try to resume parsing by finding a catch production. 
> -- ^ If that ultimately fails, call `abort`. > data ErrorExpectedMode > = NoExpected -- ^ Neither `%errorhandlertype explist` nor `%error.expected` > | OldExpected -- ^ `%errorhandlertype explist`. The error handler takes a pair `(Token, [String])` > | NewExpected -- ^ `%error.expected`. The error handler takes two (or more) args `Token -> [String] -> ...`. > deriving Eq > -- | Stuff like `\%monad`, `\%expect` > data Directives > = Directives { > token_type :: String, > imported_identity :: Bool, > monad :: (Bool,String,String,String,String), > expect :: Maybe Int, > lexer :: Maybe (String,String), > error_handler :: ErrorHandlerInfo, > error_expected :: ErrorExpectedMode > -- ^ Error handler specified in `error_handler` takes > -- a `[String]` carrying the pretty-printed expected tokens > } ----------------------------------------------------------------------------- -- Magic name values > newtype Name > = MkName { getName :: Int } > deriving ( Read, Show > , Eq, Ord, Enum, Ix) All the tokens in the grammar are mapped onto integers, for speed. The namespace is broken up as follows: epsilon = 0 error = 1 catch = 2 dummy = 3 %start = 4..s non-terminals = s..n terminals = n..m %eof = m where n_nonterminals = n - 4 (including %starts) n_terminals = 1{-error-} + 1{-catch-} + (m-n) + 1{-eof-} (including error, catch and %eof) These numbers are deeply magical, change at your own risk. Several other places rely on these being arranged as they are, including ProduceCode.lhs and the various HappyTemplates. Unfortunately this means you can't tell whether a given token is a terminal or non-terminal without knowing the boundaries of the namespace, which are kept in the Grammar structure. In hindsight, this was probably a bad idea. In normal and GHC-based parsers, these numbers are also used in the generated grammar itself, except that the error token is mapped to -1. For array-based parsers, see the note in Tabular/LALR.lhs.
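To make the magic numbering above concrete, here is a small standalone sketch (not part of the library): the constants mirror the values defined below, while `isTerminal` and the example boundary are illustrative assumptions, modelled on the check `terminalP s = s >= fst_term` in Happy.Tabular.First.

```haskell
-- Standalone sketch of the token namespace; not part of happy-lib.
newtype Name = MkName Int deriving (Eq, Ord, Show)

-- The magic values, as defined in this module:
epsilonTok, errorTok, catchTok, dummyTok, firstStartTok :: Name
epsilonTok    = MkName 0
errorTok      = MkName 1
catchTok      = MkName 2
dummyTok      = MkName 3
firstStartTok = MkName 4

-- Hypothetical helper: in the real code the boundary (first_term) is
-- kept in the Grammar record; here it is passed in explicitly.
isTerminal :: Name -> Name -> Bool
isTerminal firstTerm n = n >= firstTerm
```

With an assumed boundary of `MkName 10`, `isTerminal (MkName 10) (MkName 12)` is `True` while `isTerminal (MkName 10) dummyTok` is `False` — illustrating that a `Name` cannot be classified without knowing the namespace boundaries.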
> startName, eofName, errorName, catchName, dummyName :: String > startName = "%start" -- with a suffix, like %start_1, %start_2 etc. > eofName = "%eof" > errorName = "error" > catchName = "catch" > dummyName = "%dummy" -- shouldn't occur in the grammar anywhere TODO: Should rename firstStartTok to firstStartName! It denotes the *Name* of the first start non-terminal and semantically has nothing to do with Tokens at all. > firstStartTok, dummyTok, errorTok, catchTok, epsilonTok :: Name > firstStartTok = MkName 4 > dummyTok = MkName 3 > catchTok = MkName 2 > errorTok = MkName 1 > epsilonTok = MkName 0 happy-lib-2.1.7/grammar/src/Happy/Grammar/0000755000000000000000000000000007346545000016463 5ustar0000000000000000happy-lib-2.1.7/grammar/src/Happy/Grammar/ExpressionWithHole.hs0000644000000000000000000000064707346545000022631 0ustar0000000000000000module Happy.Grammar.ExpressionWithHole where -- | The overall expression is -- 'exprLeft ++ substitutedForHole ++ exprRight'. data ExpressionWithHole = ExpressionWithHole { exprLeft :: String, exprRight :: String } deriving (Eq, Show) substExpressionWithHole :: ExpressionWithHole -> String -> String substExpressionWithHole (ExpressionWithHole l r) = \repr -> l ++ repr ++ r happy-lib-2.1.7/happy-lib.cabal0000644000000000000000000001370107346545000014452 0ustar0000000000000000cabal-version: 3.0 name: happy-lib version: 2.1.7 license: BSD-2-Clause copyright: (c) Andy Gill, Simon Marlow author: Andy Gill and Simon Marlow maintainer: https://github.com/haskell/happy bug-reports: https://github.com/haskell/happy/issues stability: stable homepage: https://www.haskell.org/happy/ synopsis: Happy is a parser generator for Haskell implemented using this library category: Development build-type: Simple Description: Happy is a parser generator for Haskell. Given a grammar specification in BNF, Happy generates Haskell code to parse the grammar. Happy works in a similar way to the @yacc@ tool for C.
This library provides the following functionality: * Data type definitions for the Grammar AST type, capturing the information in .y-files (Happy.Grammar) * A parser for happy grammar files (.y) to produce a Grammar (Happy.Frontend.*) * Implementations of the text book algorithms that compute the LR action and goto tables for the given 'Grammar' (Happy.Tabular.*) * An LALR code generator to produce table-driven, deterministic parsing code in Haskell (Happy.Backend.LALR.*) * A (less maintained) GLR code generator to produce table-driven, non-deterministic parsing code in Haskell, where ambiguous parses produce multiple parse trees (Happy.Backend.GLR.*) tested-with: GHC == 9.12.2 GHC == 9.10.2 GHC == 9.8.4 GHC == 9.6.7 GHC == 9.4.8 GHC == 9.2.8 GHC == 9.0.2 GHC == 8.10.7 GHC == 8.8.4 GHC == 8.6.5 GHC == 8.4.4 GHC == 8.2.2 GHC == 8.0.2 extra-doc-files: ChangeLog.md README.md extra-source-files: frontend/bootstrap.sh frontend/boot-src/Parser.ly frontend/boot-src/AttrGrammarParser.ly data-dir: data data-files: HappyTemplate.hs GLR_Base.hs GLR_Lib.hs source-repository head type: git location: https://github.com/haskell/happy.git common common-stanza default-language: Haskell98 default-extensions: CPP, MagicHash, FlexibleContexts, NamedFieldPuns, PatternGuards ghc-options: -Wall -Wno-incomplete-uni-patterns library grammar import: common-stanza hs-source-dirs: grammar/src exposed-modules: Happy.Grammar Happy.Grammar.ExpressionWithHole build-depends: base < 5, array library frontend import: common-stanza hs-source-dirs: frontend/src exposed-modules: Happy.Frontend, Happy.Frontend.AbsSyn, Happy.Frontend.Mangler, Happy.Frontend.PrettyGrammar build-depends: base < 5, array, transformers, containers, mtl, grammar other-modules: Happy.Frontend.ParseMonad Happy.Frontend.ParseMonad.Class Happy.Frontend.Mangler.Monad Happy.Frontend.Parser Happy.Frontend.Lexer Happy.Frontend.ParamRules Happy.Frontend.AttrGrammar Happy.Frontend.AttrGrammar.Parser Happy.Frontend.AttrGrammar.Mangler 
library tabular import: common-stanza hs-source-dirs: tabular/src exposed-modules: Happy.Tabular, Happy.Tabular.First, Happy.Tabular.Info, Happy.Tabular.LALR, Happy.Tabular.NameSet, -- The following 3 lines constitute a workaround for cabal#10687, triggered by #328. -- The setup in #328 triggers a clashing use of Paths_happy_lib between -- `backend-lalr` and `backend-glr` in the presence of --libsubdir. -- The workaround moves Paths_happy_lib into `tabular` where it can be shared. -- However, in order to really refer to the Paths_happy_lib from `tabular`, -- we have to reexport it under a different name, because otherwise we get -- -- These modules are needed for compilation but not listed in your .cabal file's other-modules for ‘happy-lib-2.1.3-inplace-backend-lalr’ : -- Paths_happy_libsuppress -- Paths_happy_lib reexported-modules: Paths_happy_lib as Happy.Paths autogen-modules: Paths_happy_lib -- end of workaround. Delete the workaround and use the bash script posted at -- https://github.com/haskell/happy/issues/328#issuecomment-2597598320 -- to test whether the workaround is still needed. 
build-depends: base < 5, array, containers, grammar library backend-lalr import: common-stanza hs-source-dirs: backend-lalr/src exposed-modules: Happy.Backend.LALR, Happy.Backend.LALR.ProduceCode build-depends: base < 5, array, grammar, tabular library backend-glr import: common-stanza hs-source-dirs: backend-glr/src exposed-modules: Happy.Backend.GLR, Happy.Backend.GLR.ProduceCode build-depends: base < 5, array, grammar, tabular library import: common-stanza reexported-modules: Happy.Grammar, Happy.Grammar.ExpressionWithHole, Happy.Frontend, Happy.Frontend.AbsSyn, Happy.Frontend.Mangler, Happy.Frontend.PrettyGrammar, Happy.Tabular, Happy.Tabular.First, Happy.Tabular.Info, Happy.Tabular.LALR, Happy.Tabular.NameSet, Happy.Backend.LALR, Happy.Backend.LALR.ProduceCode, Happy.Backend.GLR, Happy.Backend.GLR.ProduceCode build-depends: base >= 4.9 && < 5, array >= 0.5, containers >= 0.4.2, transformers >= 0.5.6.2, mtl >= 2.2.1, -- NB: omit the `happy-lib:` prefix in happy-lib:grammar. -- Otherwise we unnecessarily break Cabal < 3.4 grammar, tabular, frontend, backend-lalr, backend-glr happy-lib-2.1.7/tabular/src/Happy/0000755000000000000000000000000007346545000015101 5ustar0000000000000000happy-lib-2.1.7/tabular/src/Happy/Tabular.lhs0000644000000000000000000000721507346545000017210 0ustar0000000000000000> module Happy.Tabular ( > Tables(..), > genTables, > SelectReductions, > select_all_reductions, > select_first_reduction > ) where > import Happy.Grammar > import Happy.Tabular.First > import Happy.Tabular.LALR > import Happy.Tabular.NameSet (NameSet) > import Data.Array( Array, assocs, elems, (!) 
) > import Data.List ( nub ) > data Tables = > Tables { > lr0items :: [ItemSetWithGotos], > la_spont :: [(Int, Lr0Item, NameSet)], > la_prop :: Array Int [(Lr0Item, Int, Lr0Item)], > lookaheads :: Array Int [(Lr0Item, NameSet)], > lr1items :: [ ([Lr1Item], [(Name,Int)]) ], > gotoTable :: GotoTable, > actionTable :: ActionTable, > conflicts :: (Array Int (Int,Int), (Int,Int)), > redundancies :: ([Int], [String]) > } > genTables :: > SelectReductions -> -- for computing used/unused > Grammar e -> > Tables > genTables select_reductions g = > let first = {-# SCC "First" #-} (mkFirst g) > closures = {-# SCC "Closures" #-} (precalcClosure0 g) > lr0items = {-# SCC "LR0_Sets" #-} (genLR0items g closures) > (la_spont, la_prop) > = {-# SCC "Prop" #-} (propLookaheads g lr0items first) > lookaheads = {-# SCC "Calc" #-} (calcLookaheads (length lr0items) la_spont la_prop) > lr1items = {-# SCC "Merge" #-} (mergeLookaheadInfo lookaheads lr0items) > gotoTable = {-# SCC "Goto" #-} (genGotoTable g lr0items) > actionTable = {-# SCC "Action" #-} (genActionTable g first lr1items) > conflicts = {-# SCC "Conflict" #-} (countConflicts actionTable) > redundancies = find_redundancies select_reductions g actionTable > in Tables { lr0items, la_spont, la_prop, lookaheads, lr1items, > gotoTable, actionTable, conflicts, redundancies } ----------------------------------------------------------------------------- Find unused rules and tokens > find_redundancies > :: SelectReductions -> Grammar e -> ActionTable -> ([Int], [String]) > find_redundancies extract_reductions g action_table = > (unused_rules, map (env !) unused_terminals) > where > Grammar { terminals = terms, > token_names = env, > eof_term = eof, > starts = starts', > productions = productions' > } = g > actions = concat (map assocs (elems action_table)) > start_rules = [ 0 .. 
(length starts' - 1) ] > used_rules = start_rules ++ > nub [ r | (_,a) <- actions, r <- extract_reductions a ] > used_tokens = errorTok : catchTok : eof : > nub [ t | (t,a) <- actions, is_shift a ] > n_prods = length productions' > unused_terminals = filter (`notElem` used_tokens) terms > unused_rules = filter (`notElem` used_rules ) [0..n_prods-1] > is_shift :: LRAction -> Bool > is_shift (LR'Shift _ _) = True > is_shift (LR'Multiple _ LR'Shift{}) = True > is_shift _ = False --- selects what counts as a reduction when calculating used/unused > type SelectReductions = LRAction -> [Int] > select_all_reductions :: SelectReductions > select_all_reductions = go > where go (LR'Reduce r _) = [r] > go (LR'Multiple as a) = concatMap go (a : as) > go _ = [] > select_first_reduction :: SelectReductions > select_first_reduction = go > where go (LR'Reduce r _) = [r] > go (LR'Multiple _ a) = go a -- eg R/R conflict > go _ = [] happy-lib-2.1.7/tabular/src/Happy/Tabular/0000755000000000000000000000000007346545000016473 5ustar0000000000000000happy-lib-2.1.7/tabular/src/Happy/Tabular/First.lhs0000644000000000000000000000466307346545000020303 0ustar0000000000000000----------------------------------------------------------------------------- Implementation of FIRST (c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Happy.Tabular.First ( mkFirst, mkClosure ) where > import Happy.Tabular.NameSet ( NameSet ) > import qualified Happy.Tabular.NameSet as Set > import Happy.Grammar > import Data.Maybe (fromMaybe) \subsection{Utilities} > joinSymSets :: (a -> NameSet) -> [a] -> NameSet > joinSymSets f = foldr go (Set.singleton epsilonTok) . map f > where > go h b > | Set.member epsilonTok h = Set.delete epsilonTok h `Set.union` b > | otherwise = h @mkClosure@ makes a closure, when given a comparison and iteration loop. 
It's a fixed point computation: we keep applying the function to the
input until the result no longer changes.  Be careful: if the function
always produces a different object, this will never terminate.

> mkClosure :: (a -> a -> Bool) -> (a -> a) -> a -> a
> mkClosure eq f = until (\x -> eq x (f x)) f

\subsection{Implementation of FIRST}

> mkFirst :: Grammar e -> [Name] -> NameSet
> mkFirst (Grammar { first_term = fst_term
>                  , lookupProdNo = prodNo
>                  , lookupProdsOfName = prodsOfName
>                  , non_terminals = nts
>                  })
>   = joinSymSets (\h -> fromMaybe (Set.singleton h) (lookup h env))
>   where
>     env = mkClosure (==) (updateFirstSets fst_term prodNo prodsOfName)
>                     [(name,Set.empty) | name <- nts]

> updateFirstSets :: Name -> (a -> Production e) -> (Name -> [a]) -> [(Name, NameSet)]
>                 -> [(Name, NameSet)]
> updateFirstSets fst_term prodNo prodsOfName env = [ (nm, nextFstSet nm)
>                                                   | (nm,_) <- env ]
>   where
>     terminalP :: Name -> Bool
>     terminalP s = s >= fst_term

>     currFstSet :: Name -> NameSet
>     currFstSet s | s == errorTok || s == catchTok || terminalP s = Set.singleton s
>                  | otherwise = fromMaybe (error "attempted FIRST(e) :-(")
>                                          (lookup s env)

>     nextFstSet :: Name -> NameSet
>     nextFstSet s | terminalP s = Set.singleton s
>                  | otherwise = Set.unions [ joinSymSets currFstSet rhs
>                                           | rl <- prodsOfName s
>                                           , let Production _ rhs _ _ = prodNo rl ]

happy-lib-2.1.7/tabular/src/Happy/Tabular/Info.lhs

-----------------------------------------------------------------------------
Generating info files.
(c) 1993-2001 Andy Gill, Simon Marlow ----------------------------------------------------------------------------- > module Happy.Tabular.Info (genInfoFile) where > import Data.Set ( Set ) > import qualified Data.Set as Set hiding ( Set ) > import Happy.Grammar > import Happy.Grammar.ExpressionWithHole ( substExpressionWithHole ) > import Happy.Tabular.LALR ( Lr0Item(..), LRAction(..), Goto(..), GotoTable, ActionTable ) > import Data.Array > import Data.List (nub) > import Data.Version ( Version, showVersion ) Produce a file of parser information, useful for debugging the parser. > genInfoFile > :: [Set Lr0Item] > -> Grammar e > -> ActionTable > -> GotoTable > -> Array Int (Int,Int) > -> String > -> [Int] -- unused rules > -> [String] -- unused terminals > -> Version > -> String > genInfoFile items > (Grammar { productions = prods > , lookupProdNo = lookupProd > , lookupProdsOfName = lookupProdNos > , non_terminals = nonterms > , token_names = env > , token_specs = tokens > }) > action goto conflictArray filename unused_rules unused_terminals version > = (showHeader > . showConflicts > . showUnused > . showProductions > . showTerminals > . showNonTerminals > . showStates > . showStats > ) "" > where > showHeader > = banner ("Info file generated by Happy Version " ++ > showVersion version ++ " from " ++ filename) > showConflicts > = str "\n" > . foldr (.) id (map showConflictsState (assocs conflictArray)) > . str "\n" > showConflictsState (_, (0,0)) = id > showConflictsState (state, (sr,rr)) > = str "state " > . shows state > . str " contains " > . interleave' " and " ( > (if sr /= 0 > then [ shows sr . str " shift/reduce conflicts" ] > else []) ++ > if rr /= 0 > then [ shows rr . str " reduce/reduce conflicts" ] > else []) > . str ".\n" > showUnused = > (case unused_rules of > [] -> id > _ -> interleave "\n" ( > map (\r -> str "rule " > . shows r > . str " is unused") > unused_rules) > . str "\n") > . 
(case unused_terminals of > [] -> id > _ -> interleave "\n" ( > map (\t -> str "terminal " > . str t > . str " is unused") > unused_terminals) > . str "\n") > showProductions = > banner "Grammar" > . interleave "\n" (zipWith showProduction prods [ 0 :: Int .. ]) > . str "\n" > showProduction (Production nt toks _sem _prec) i > = ljuststr 50 ( > str "\t" > . showName nt > . str " -> " > . interleave " " (map showName toks)) > . str " (" . shows i . str ")" > showStates = > banner "States" > . interleave "\n" (zipWith showState > (map Set.toAscList items) [ 0 :: Int .. ]) > showState state n > = str "State ". shows n > . str "\n\n" > . interleave "\n" (map showItem selectedItems) > . str "\n" > . foldr (.) id (map showAction (assocs (action ! n))) > . str "\n" > . foldr (.) id (map showGoto (assocs (goto ! n))) > where > nonRuleItems = [ (Lr0 r d) | (Lr0 r d) <- state, d /= 0 ] > selectedItems = if null nonRuleItems then take 1 state else nonRuleItems > -- andreasabel, 2019-11-12, issue #161: > -- Items that start with a dot (@d == 0@) are usually added by completion > -- and thus redundant and dropped from the printout (@nonRuleItems@). > -- However, if the initial item started with a dot, it should not be dropped, > -- otherwise there will be no items left. Thus, should there be no items > -- not starting with a dot, we print the initial item. > showItem (Lr0 rule dot) > = ljuststr 50 ( > str "\t" > . showName nt > . str " -> " > . interleave " " (map showName beforeDot) > . str ". " > . interleave " " (map showName afterDot)) > . str " (rule " . shows rule . str ")" > where > Production nt toks _sem _prec = lookupProd rule > (beforeDot, afterDot) = splitAt dot toks > showAction (_, LR'Fail) > = id > showAction (t, act) > = str "\t" > . showJName 15 t > . showAction' act > . str "\n" > showAction' LR'MustFail > = str "fail" > showAction' (LR'Shift n _) > = str "shift, and enter state " > . 
shows n > showAction' LR'Accept > = str "accept" > showAction' (LR'Reduce n _) > = str "reduce using rule " > . shows n > showAction' (LR'Multiple as a) > = showAction' a > . str "\n" > . interleave "\n" > (map (\a' -> str "\t\t\t(" . showAction' a' . str ")") > (nub (filter (/= a) as))) > showAction' LR'Fail = error "showAction' LR'Fail: Unhandled case" > showGoto (_, NoGoto) > = id > showGoto (nt, Goto n) > = str "\t" > . showJName 15 nt > . str "goto state " > . shows n > . str "\n" > showTerminals > = banner "Terminals" > . interleave "\n" (map showTerminal tokens) > . str "\n" > showTerminal (t,s) > = str "\t" > . showJName 15 t > . str "{ " . showToken s . str " }" > showToken (TokenFixed s) = str s > showToken (TokenWithValue e) = str $ substExpressionWithHole e "$$" > showNonTerminals > = banner "Non-terminals" > . interleave "\n" (map showNonTerminal nonterms) > . str "\n" > showNonTerminal nt > = str "\t" > . showJName 15 nt > . (if (length nt_rules == 1) > then str " rule " > else str " rules ") > . foldr1 (\a b -> a . str ", " . b) nt_rules > where nt_rules = map shows (lookupProdNos nt) > showStats > = banner "Grammar Totals" > . str "Number of rules: " . shows (length prods) > . str "\nNumber of terminals: " . shows (length tokens) > . str "\nNumber of non-terminals: " . shows (length nonterms) > . str "\nNumber of states: " . shows (length items) > . str "\n" > nameOf n = env ! n > showName = str . nameOf > showJName j = str . ljustify j . nameOf > ljustify :: Int -> String -> String > ljustify n s = s ++ replicate (max 0 (n - length s)) ' ' > ljuststr :: Int -> (String -> String) -> String -> String > ljuststr n s = str (ljustify n (s "")) > banner :: String -> String -> String > banner s > = str "-----------------------------------------------------------------------------\n" > . str s > . 
str "\n-----------------------------------------------------------------------------\n" > str :: String -> String -> String > str = showString > interleave :: String -> [String -> String] -> String -> String > interleave s = foldr (\a b -> a . str s . b) id > interleave' :: String -> [String -> String] -> String -> String > interleave' s = foldr1 (\a b -> a . str s . b) happy-lib-2.1.7/tabular/src/Happy/Tabular/LALR.lhs0000644000000000000000000006400207346545000017737 0ustar0000000000000000----------------------------------------------------------------------------- Generation of LALR parsing tables. (c) 1993-1996 Andy Gill, Simon Marlow (c) 1997-2001 Simon Marlow ----------------------------------------------------------------------------- > module Happy.Tabular.LALR > (genActionTable, genGotoTable, genLR0items, precalcClosure0, > propLookaheads, calcLookaheads, mergeLookaheadInfo, countConflicts, > Lr0Item(..), Lr1Item(..), ItemSetWithGotos, LRAction(..), Lr1State, > ActionTable, GotoTable, Goto(..)) > where > import Happy.Tabular.First ( mkClosure ) > import Happy.Tabular.NameSet ( NameSet ) > import qualified Happy.Tabular.NameSet as NameSet > import Happy.Grammar > import Data.IntSet ( IntSet ) > import qualified Data.IntSet as IntSet hiding ( IntSet ) > import Data.Set ( Set ) > import qualified Data.Set as Set hiding ( Set ) > import Control.Monad (guard) > import Control.Monad.ST > import Data.Array.ST > import Data.Array as Array > import Data.List (nub,foldl',groupBy,sortBy) > import Data.Function (on) > import Data.Maybe (listToMaybe, maybeToList) > unionMap :: (Ord b) => (a -> Set b) -> Set a -> Set b > unionMap f = Set.foldr (Set.union . f) Set.empty > unionIntMap :: (Int -> IntSet) -> IntSet -> IntSet > unionIntMap f = IntSet.foldr (IntSet.union . f) IntSet.empty > unionNameMap :: (Name -> NameSet) -> NameSet -> NameSet > unionNameMap f = NameSet.foldr (NameSet.union . 
f) NameSet.empty ----------------------------------------------------------------------------- This means rule $a$, with dot at $b$ (all starting at 0) > data Lr0Item = Lr0 {-#UNPACK#-}!Int {-#UNPACK#-}!Int -- (rule, dot) > deriving (Eq,Ord,Show) > data Lr1Item = Lr1 {-#UNPACK#-}!Int {-#UNPACK#-}!Int NameSet -- (rule, dot, lookahead) > deriving (Show) > type RuleList = [Lr0Item] > type ItemSetWithGotos = (Set Lr0Item, [(Name,Int)]) > data LRAction = LR'Shift Int Priority -- state number and priority > | LR'Reduce Int Priority-- rule no and priority > | LR'Accept -- :-) > | LR'Fail -- :-( > | LR'MustFail -- :-( > | LR'Multiple [LRAction] LRAction -- conflict > deriving (Eq,Show) > type ActionTable = Array Int{-state-} (Array Name{-terminal#-} LRAction) > type GotoTable = Array Int{-state-} (Array Name{-nonterminal #-} Goto) > data Goto = Goto Int | NoGoto > deriving (Eq, Show) ----------------------------------------------------------------------------- Token numbering in an array-based parser: % Action Table We have an action table, indexed by states in the y direction, and terminal number in the x direction. ie. action = (state * n_terminals + terminal). The terminal number is given by (for terminals only): tok_number - n_nonterminals - 3 so we have error = 0 terminals = 1..n %eof = n+1 % Goto Table The goto table is indexed by nonterminal number (without %starts), ie (state * (n_nonterminals-s)) + tok_number - s ----------------------------------------------------------------------------- Generating the closure of a set of LR(0) items Precalculate the rule closure for each non-terminal in the grammar, using a memo table so that no work is repeated. 
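This closure-by-fixed-point idea can be sketched on a toy grammar (the grammar, names and rule numbers below are made up for illustration; Happy's real computation works over its `Grammar` record and memoises the result per nonterminal): starting from a nonterminal's own rules, keep adding the rules of whichever nonterminal begins a right-hand side, until the set stops growing — the same `mkClosure` fixed point used for FIRST.

```haskell
import Data.IntSet (IntSet)
import qualified Data.IntSet as IntSet

-- The same fixed point used for FIRST: iterate until the result is stable.
mkClosure :: (a -> a -> Bool) -> (a -> a) -> a -> a
mkClosure eq f = until (\x -> eq x (f x)) f

-- Toy grammar, rules numbered 0..3:
--   0: E -> T '+' E     1: E -> T
--   2: T -> '(' E ')'   3: T -> 'x'
rulesOf :: Char -> IntSet
rulesOf 'E' = IntSet.fromList [0,1]
rulesOf 'T' = IntSet.fromList [2,3]
rulesOf _   = IntSet.empty

-- Nonterminal (if any) in the first position of each rule's right-hand side.
leading :: Int -> Maybe Char
leading 0 = Just 'T'
leading 1 = Just 'T'
leading _ = Nothing

-- Rule closure of a nonterminal: its own rules plus, transitively, the
-- rules of any nonterminal that can begin one of those rules.
closure0 :: Char -> IntSet
closure0 nt = mkClosure (==) step (rulesOf nt)
  where
    step rs = rs `IntSet.union`
              IntSet.unions [ rulesOf nt' | r <- IntSet.toList rs
                                          , Just nt' <- [leading r] ]

main :: IO ()
main = print (IntSet.toList (closure0 'E'))  -- E's rules plus T's
```

Since rules 0 and 1 both begin with `T`, the closure of `E` picks up `T`'s rules 2 and 3 on the first iteration and then stabilises.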
> precalcClosure0 :: Grammar e -> Name -> RuleList > precalcClosure0 g = > \n -> maybe [] id (lookup n info') > where > > info' :: [(Name, RuleList)] > info' = map (\(n,rules) -> (n,map (\rule -> Lr0 rule 0) (IntSet.toAscList rules))) info > info :: [(Name, IntSet)] > info = mkClosure (==) (\f -> map (follow f) f) > (map (\nt -> (nt,IntSet.fromList (lookupProdsOfName g nt))) nts) > follow :: [(Name, IntSet)] -> (Name, IntSet) -> (Name, IntSet) > follow f (nt,rules) = (nt, unionIntMap (followNT f) rules `IntSet.union` rules) > followNT :: [(Name, IntSet)] -> Int -> IntSet > followNT f rule = > case findRule g rule 0 of > Just nt | nt >= firstStartTok && nt < fst_term -> > maybe (error "followNT") id (lookup nt f) > _ -> IntSet.empty > nts = non_terminals g > fst_term = first_term g > closure0 :: Grammar e -> (Name -> RuleList) -> Set Lr0Item -> Set Lr0Item > closure0 g closureOfNT set = Set.foldr addRules Set.empty set > where > last_nonterm = MkName $ getName (first_term g) - 1 > addRules rule set' = Set.union (Set.fromList (rule : closureOfRule rule)) set' > > closureOfRule (Lr0 rule dot) = > case findRule g rule dot of > (Just nt) | nt >= firstStartTok && nt <= last_nonterm > -> closureOfNT nt > _ -> [] ----------------------------------------------------------------------------- Generating the closure of a set of LR(1) items > closure1 :: Grammar e -> ([Name] -> NameSet) -> [Lr1Item] -> [Lr1Item] > closure1 g first set > = fst (mkClosure (\(_,new) _ -> null new) addItems ([],set)) > where > last_nonterm = MkName $ getName (first_term g) - 1 > addItems :: ([Lr1Item],[Lr1Item]) -> ([Lr1Item],[Lr1Item]) > addItems (old_items, new_items) = (new_old_items, new_new_items) > where > new_old_items = new_items `union_items` old_items > new_new_items = subtract_items > (foldr union_items [] (map fn new_items)) > new_old_items > fn :: Lr1Item -> [Lr1Item] > fn (Lr1 rule dot as) = case drop dot lhs of > (nt:beta) | nt >= firstStartTok && nt <= last_nonterm -> > let terms = 
NameSet.delete catchTok $ -- the catch token is always shifted and never reduced (see pop_items)
>                           unionNameMap (\a -> first (beta ++ [a])) as
>               in
>               [ (Lr1 rule' 0 terms) | rule' <- lookupProdsOfName g nt ]
>          _ -> []
>    where Production _name lhs _ _ = lookupProdNo g rule

Subtract the first set of items from the second.

> subtract_items :: [Lr1Item] -> [Lr1Item] -> [Lr1Item]
> subtract_items items1 items2 = foldr (subtract_item items2) [] items1

These utilities over item sets are crucial to performance.
Stamp on overloading with judicious use of type signatures...

> subtract_item :: [Lr1Item] -> Lr1Item -> [Lr1Item] -> [Lr1Item]
> subtract_item [] i result = i : result
> subtract_item ((Lr1 rule dot as):items) i@(Lr1 rule' dot' as') result =
>       case compare rule' rule of
>               LT -> i : result
>               GT -> carry_on
>               EQ -> case compare dot' dot of
>                       LT -> i : result
>                       GT -> carry_on
>                       EQ -> case NameSet.difference as' as of
>                               bs | NameSet.null bs -> result
>                                  | otherwise -> (Lr1 rule dot bs) : result
>       where
>               carry_on = subtract_item items i result

Union two sets of items.

> union_items :: [Lr1Item] -> [Lr1Item] -> [Lr1Item]
> union_items is [] = is
> union_items [] is = is
> union_items (i@(Lr1 rule dot as):is) (i'@(Lr1 rule' dot' as'):is') =
>       case compare rule rule' of
>               LT -> drop_i
>               GT -> drop_i'
>               EQ -> case compare dot dot' of
>                       LT -> drop_i
>                       GT -> drop_i'
>                       EQ -> (Lr1 rule dot (as `NameSet.union` as')) : union_items is is'
>       where
>               drop_i  = i  : union_items is (i':is')
>               drop_i' = i' : union_items (i:is) is'

-----------------------------------------------------------------------------
goto(I,X) function

The input should be the closure of a set of kernel items I together with
a token X (terminal or non-terminal).
Output will be the set of kernel items for the set of items goto(I,X) > gotoClosure :: Grammar e -> Set Lr0Item -> Name -> Set Lr0Item > gotoClosure gram i x = unionMap fn i > where > fn (Lr0 rule_no dot) = > case findRule gram rule_no dot of > Just t | x == t -> Set.singleton (Lr0 rule_no (dot+1)) > _ -> Set.empty ----------------------------------------------------------------------------- Generating LR0 Item sets The item sets are generated in much the same way as we find the closure of a set of items: we use two sets, those which have already generated more sets, and those which have just been generated. We keep iterating until the second set is empty. The addItems function is complicated by the fact that we need to keep information about which sets were generated by which others. > genLR0items :: Grammar e -> (Name -> RuleList) -> [ItemSetWithGotos] > genLR0items g precalcClosures > = fst (mkClosure (\(_,new) _ -> null new) > addItems > (([],startRules))) > where > n_starts = length (starts g) > startRules :: [Set Lr0Item] > startRules = [ Set.singleton (Lr0 rule 0) | rule <- [0..n_starts] ] > tokens = non_terminals g ++ terminals g > addItems :: ([ItemSetWithGotos], [Set Lr0Item]) > -> ([ItemSetWithGotos], [Set Lr0Item]) > > addItems (oldSets,newSets) = (newOldSets, reverse newNewSets) > where > > newOldSets = oldSets ++ (zip newSets intgotos) > itemSets = map fst oldSets ++ newSets First thing to do is for each set in I in newSets, generate goto(I,X) for each token (terminals and nonterminals) X. > gotos :: [[(Name,Set Lr0Item)]] > gotos = map (filter (not . Set.null . snd)) > (map (\i -> let i' = closure0 g precalcClosures i in > [ (x,gotoClosure g i' x) | x <- tokens ]) newSets) Next, we assign each new set a number, which is the index of this set in the list of sets comprising all the sets generated so far plus those generated in this iteration. We also filter out those sets that are new, i.e. 
don't exist in the current list of sets, so that they can be added. We also have to make sure that there are no duplicate sets in the *current* batch of goto(I,X) sets, as this could be disastrous. I think I've squished this one with the '++ reverse newSets' in numberSets. numberSets is built this way so we can use it quite neatly with a foldr. Unfortunately, the code's a little opaque. > numberSets > :: [(Name,Set Lr0Item)] > -> (Int, > [[(Name,Int)]], > [Set Lr0Item]) > -> (Int, [[(Name,Int)]], [Set Lr0Item]) > > numberSets [] (i,gotos',newSets') = (i,([]:gotos'),newSets') > numberSets ((x,gotoix):rest) (i,g':gotos',newSets') > = numberSets rest > (case indexInto 0 gotoix (itemSets ++ reverse newSets') of > Just j -> (i, ((x,j):g'):gotos', newSets') > Nothing -> (i+1,((x,i):g'):gotos', gotoix:newSets')) > numberSets _ _ = error "genLR0items/numberSets: Unhandled case" Finally, do some fiddling around to get this all in the form we want. > intgotos :: [[(Name,Int)]] > newNewSets :: [Set Lr0Item] > (_, ([]:intgotos), newNewSets) = > foldr numberSets (length newOldSets, [[]], []) gotos > indexInto :: Eq a => Int -> a -> [a] -> Maybe Int > indexInto _ _ [] = Nothing > indexInto i x (y:ys) | x == y = Just i > | otherwise = let j = i + 1 in j `seq` indexInto j x ys ----------------------------------------------------------------------------- Computing propagation of lookaheads ToDo: generate this info into an array to be used in the subsequent calcLookaheads pass. 
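Before the real thing, the overall shape of lookahead propagation can be seen in a toy model (hypothetical simplified types — plain `Int` items and `Char` lookahead sets, not Happy's `Lr0Item`/`NameSet`): spontaneous lookaheads seed a table, and propagation edges copy sets from item to item until a fixed point is reached.

```haskell
import Data.Map (Map)
import qualified Data.Map as Map
import Data.Set (Set)
import qualified Data.Set as Set

type Item = Int
type LA   = Map Item (Set Char)

-- Copy lookaheads along every propagation edge until the table is stable.
propagate :: [(Item, Item)] -> LA -> LA
propagate edges la
  | la' == la = la
  | otherwise = propagate edges la'
  where
    la' = foldr copy la edges
    copy (i, j) m =
      Map.insertWith Set.union j (Map.findWithDefault Set.empty i m) m

-- Item 0 spontaneously acquires 'a', item 2 acquires 'b'; the edges say
-- item 0 propagates to item 1, and item 1 propagates to item 2.
example :: LA
example = propagate [(0,1), (1,2)]
                    (Map.fromList [(0, Set.fromList "a"), (2, Set.fromList "b")])

main :: IO ()
main = print (Map.toList (Map.map Set.toList example))
```

After two rounds, `'a'` has flowed from item 0 through item 1 into item 2, which keeps its spontaneous `'b'` as well. Happy's `calcLookaheads` below does the same thing with a worklist and a mutable array instead of re-scanning every edge.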
> propLookaheads > :: Grammar e > -> [ItemSetWithGotos] -- ^ LR(0) kernel sets > -> ([Name] -> NameSet) -- ^ First function > -> ( > [(Int, Lr0Item, NameSet)], -- spontaneous lookaheads > Array Int [(Lr0Item, Int, Lr0Item)] -- propagated lookaheads > ) > propLookaheads gram sets first = (concat s, array (0,length sets - 1) > [ (a,b) | (a,b) <- p ]) > where > (s,p) = unzip (zipWith propLASet sets [0..]) > propLASet :: (Set Lr0Item, [(Name, Int)]) -> Int -> ([(Int, Lr0Item, NameSet)],(Int,[(Lr0Item, Int, Lr0Item)])) > propLASet (set,goto) i = (start_spont ++ concat s', (i, concat p')) > where > (s',p') = unzip (map propLAItem (Set.toAscList set)) > -- spontaneous EOF lookaheads for each start state & rule... > start_info :: [(String, Name, Name, Bool)] > start_info = starts gram > start_spont :: [(Int, Lr0Item ,NameSet)] > start_spont = [ (start, (Lr0 start 0), > NameSet.singleton (startLookahead gram partial)) > | (start, (_,_,_,partial)) <- > zip [0..] start_info] > propLAItem :: Lr0Item -> ([(Int, Lr0Item, NameSet)], [(Lr0Item, Int, Lr0Item)]) > propLAItem item@(Lr0 rule dot) = (spontaneous, propagated) > where > lookupGoto msg x = maybe (error msg) id (lookup x goto) > j = closure1 gram first [Lr1 rule dot (NameSet.singleton dummyTok)] > spontaneous :: [(Int, Lr0Item, NameSet)] > spontaneous = do > (Lr1 rule' dot' ts) <- j > let ts' = NameSet.delete dummyTok ts > guard (not $ NameSet.null ts') > maybeToList $ do r <- findRule gram rule' dot' > return ( lookupGoto "spontaneous" r > , Lr0 rule' (dot' + 1) > , ts' ) > propagated :: [(Lr0Item, Int, Lr0Item)] > propagated = do > (Lr1 rule' dot' ts) <- j > guard $ NameSet.member dummyTok ts > maybeToList $ do r <- findRule gram rule' dot' > return ( item > , lookupGoto "propagated" r > , Lr0 rule' (dot' + 1) ) The lookahead for a start rule depends on whether it was declared with %name or %partial: a %name parser is assumed to parse the whole input, ending with EOF, whereas a %partial parser may parse only a part of 
the input: it accepts when the error token is found. > startLookahead :: Grammar e -> Bool -> Name > startLookahead gram partial = if partial then errorTok else eof_term gram ----------------------------------------------------------------------------- Calculate lookaheads Special version using a mutable array: > calcLookaheads > :: Int -- number of states > -> [(Int, Lr0Item, NameSet)] -- spontaneous lookaheads > -> Array Int [(Lr0Item, Int, Lr0Item)] -- propagated lookaheads > -> Array Int [(Lr0Item, NameSet)] > calcLookaheads n_states spont prop > = runST $ do > arr <- newArray (0,n_states) [] > propagate arr (fold_lookahead spont) > freeze arr > where > propagate :: STArray s Int [(Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] -> ST s () > propagate _ [] = return () > propagate arr new = do > let > items = [ (i,item'',s) | (j,item,s) <- new, > (item',i,item'') <- prop ! j, > item == item' ] > new_new <- get_new arr items [] > add_lookaheads arr new > propagate arr new_new This function is needed to merge all the (set_no,item,name) triples into (set_no, item, set name) triples. It can be removed when we get the spontaneous lookaheads in the right form to begin with (ToDo). 
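The merging described above — collapsing many (set_no, item, lookahead) triples into one per (set_no, item) with the sets unioned, as `fold_lookahead` does — can be sketched with plain types (`Int` states and items, `Char` sets in place of `NameSet`; a sketch, not Happy's code):

```haskell
import Data.Function (on)
import Data.List (groupBy, sortBy)
import Data.Set (Set)
import qualified Data.Set as Set

-- Group triples sharing (state, item) and union their lookahead sets.
foldLookahead :: [(Int, Int, Set Char)] -> [(Int, Int, Set Char)]
foldLookahead =
      map (\cs@(((a,b),_):_) -> (a, b, Set.unions (map snd cs)))
    . groupBy ((==) `on` fst)
    . sortBy (compare `on` fst)
    . map (\(a,b,c) -> ((a,b),c))

main :: IO ()
main = print (foldLookahead [ (0, 1, Set.fromList "a")
                            , (2, 0, Set.fromList "c")
                            , (0, 1, Set.fromList "b") ])
```

Sorting on the (state, item) key first makes the duplicates adjacent, so a single `groupBy` pass suffices; the two `(0,1)` entries merge into one carrying `{'a','b'}`.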
> add_lookaheads :: STArray s Int [(Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] > -> ST s () > add_lookaheads arr = mapM_ $ \(i,item,s) > -> do las <- readArray arr i > writeArray arr i (add_lookahead item s las) > get_new :: STArray s Int [(Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] > -> [(Int, Lr0Item, NameSet)] > -> ST s [(Int, Lr0Item, NameSet)] > get_new _ [] new = return new > get_new arr (l@(i,_item,_s):las) new = do > state_las <- readArray arr i > get_new arr las (get_new' l state_las new) > add_lookahead :: Lr0Item -> NameSet -> [(Lr0Item,NameSet)] -> > [(Lr0Item,NameSet)] > add_lookahead item s [] = [(item,s)] > add_lookahead item s (m@(item',s') : las) > | item == item' = (item, s `NameSet.union` s') : las > | otherwise = m : add_lookahead item s las > get_new' :: (Int,Lr0Item,NameSet) -> [(Lr0Item,NameSet)] -> > [(Int,Lr0Item,NameSet)] -> [(Int,Lr0Item,NameSet)] > get_new' l [] new = l : new > get_new' l@(i,item,s) ((item',s') : las) new > | item == item' = > let s'' = s NameSet.\\ s' in > if NameSet.null s'' then new else (i,item,s'') : new > | otherwise = > get_new' l las new > fold_lookahead :: [(Int,Lr0Item,NameSet)] -> [(Int,Lr0Item,NameSet)] > fold_lookahead = > map (\cs@(((a,b),_):_) -> (a,b,NameSet.unions $ map snd cs)) . > groupBy ((==) `on` fst) . > sortBy (compare `on` fst) . > map (\(a,b,c) -> ((a,b),c)) ----------------------------------------------------------------------------- Merge lookaheads > -- TODO needs better name > type Lr1State = ([Lr1Item], [(Name, Int)]) Stick the lookahead info back into the state table. > mergeLookaheadInfo > :: Array Int [(Lr0Item, NameSet)] -- ^ lookahead info > -> [ItemSetWithGotos] -- ^ state table > -> [Lr1State] > mergeLookaheadInfo lookaheads sets > = zipWith mergeIntoSet sets [0..] 
> where > mergeIntoSet :: ItemSetWithGotos -> Int -> Lr1State > mergeIntoSet (items, goto) i > = (map mergeIntoItem (Set.toAscList items), goto) > where > mergeIntoItem :: Lr0Item -> Lr1Item > mergeIntoItem item@(Lr0 rule dot) = Lr1 rule dot la > where la = case [ s | (item',s) <- lookaheads ! i, > item == item' ] of > [] -> NameSet.empty > [x] -> x > _ -> error "mergIntoItem" ----------------------------------------------------------------------------- Generate the goto table This is pretty straightforward, given all the information we stored while generating the LR0 sets of items. Generating the goto table doesn't need lookahead info. > genGotoTable :: Grammar e -> [ItemSetWithGotos] -> GotoTable > genGotoTable g sets = gotoTable > where > Grammar{ first_nonterm = fst_nonterm, > first_term = fst_term, > non_terminals = non_terms } = g > > -- goto array doesn't include %start symbols > gotoTable = listArray (0,length sets-1) > [ > (array (fst_nonterm, MkName $ getName fst_term - 1) [ > (n, maybe NoGoto Goto (lookup n goto)) > | n <- non_terms, > n >= fst_nonterm, n < fst_term ]) > | (_set,goto) <- sets ] ----------------------------------------------------------------------------- Generate the action table > genActionTable :: Grammar e -> ([Name] -> NameSet) -> > [Lr1State] -> ActionTable > genActionTable g first sets = actionTable > where > Grammar { first_term = fst_term, > terminals = terms, > starts = starts', > priorities = prios } = g > n_starts = length starts' > isStartRule rule = rule < n_starts -- a bit hacky, but it'll do for now > term_lim = (head terms,last terms) > actionTable = array (0,length sets-1) > [ (set_no, accumArray res > LR'Fail term_lim > (possActions goto set)) > | ((set,goto),set_no) <- zip sets [0..] 
] > possAction goto _set (Lr1 rule pos la) = > case findRule g rule pos of > Just t | t >= fst_term || t == errorTok || t == catchTok -> > let f j = (t,LR'Shift j p) > p = maybe No id (lookup t prios) > in map f $ maybeToList (lookup t goto) > Nothing > | isStartRule rule > -> let (_,_,_,partial) = starts' !! rule in > [ (startLookahead g partial, LR'Accept{-'-}) ] > | otherwise > -> let Production _ _ _ p = lookupProdNo g rule in > NameSet.toAscList la `zip` repeat (LR'Reduce rule p) > _ -> [] > possActions goto coll = do item <- closure1 g first coll > possAction goto coll item These comments are now out of date! /JS Here's how we resolve conflicts, leaving a complete record of the conflicting actions in an LR'Multiple structure for later output in the info file. Shift/reduce conflicts are always resolved as shift actions, and reduce/reduce conflicts are resolved as a reduce action using the rule with the lowest number (i.e. the rule that comes first in the grammar file.) NOTES on LR'MustFail: this was introduced as part of the precedence parsing changes. The problem with LR'Fail is that it is a soft failure: we sometimes substitute an LR'Fail for an LR'Reduce (eg. when computing default actions), on the grounds that an LR'Fail in this state will also be an LR'Fail in the goto state, so we'll fail eventually. This may not be true with precedence parsing, though. If there are two non-associative operators together, we must fail at this point rather than reducing. Hence the use of LR'MustFail. NOTE: on (LR'Multiple as a) handling PCC [sep04] has changed this to have the following invariants: * the winning action appears only once, in the "a" slot * only reductions appear in the "as" list * there are no duplications This removes complications elsewhere, where LR'Multiples were building up tree structures... 
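The precedence core of this resolution can be distilled into a standalone sketch (hypothetical simplified types; `PrioLowest` and the `LR'Multiple` bookkeeping that `res` maintains are omitted): higher precedence wins, a tie falls back to the rule's associativity, and a missing precedence means shift wins — the classic yacc default.

```haskell
data Assoc    = LeftA | RightA | NonA     deriving (Eq, Show)
data Prec     = NoPrec | Prec Assoc Int   deriving (Eq, Show)
data Decision = Shift | Reduce | MustFail deriving (Eq, Show)

-- Shift/reduce resolution: compare precedences; on a tie the rule's
-- associativity decides; with no precedence information, shift wins.
resolveSR :: Prec      -- precedence of the rule being reduced
          -> Prec      -- precedence of the shiftable lookahead token
          -> Decision
resolveSR (Prec a i) (Prec _ j)
  | i > j     = Reduce
  | i < j     = Shift
  | otherwise = case a of
      LeftA  -> Reduce
      RightA -> Shift
      NonA   -> MustFail
resolveSR _ _ = Shift

main :: IO ()
main = mapM_ print
  [ resolveSR (Prec LeftA 6) (Prec LeftA 7)  -- '+' rule vs '*' token
  , resolveSR (Prec LeftA 7) (Prec LeftA 6)  -- '*' rule vs '+' token
  , resolveSR (Prec LeftA 6) (Prec LeftA 6)  -- left-assoc tie
  , resolveSR (Prec NonA 4)  (Prec NonA 4)   -- non-assoc tie
  ]
```

The non-associative tie maps to `MustFail` rather than a soft failure, for exactly the reason the note above gives: substituting a soft fail for the reduce would be wrong here.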
> res LR'Fail x = x > res x LR'Fail = x > res LR'MustFail _ = LR'MustFail > res _ LR'MustFail = LR'MustFail > res x x' | x == x' = x > res (LR'Accept) _ = LR'Accept > res _ (LR'Accept) = LR'Accept > res (LR'Multiple as x) (LR'Multiple bs x') > | x == x' = LR'Multiple (nub $ as ++ bs) x > -- merge dropped reductions for identical action > | otherwise > = case res x x' of > LR'Multiple cs a > | a == x -> LR'Multiple (nub $ x' : as ++ bs ++ cs) x > | a == x' -> LR'Multiple (nub $ x : as ++ bs ++ cs) x' > | otherwise -> error "failed invariant in resolve" > -- last means an unexpected change > other -> other > -- merge dropped reductions for clashing actions, but only > -- if they were S/R or R/R > res a@(LR'Multiple _ _) b = res a (LR'Multiple [] b) > res a b@(LR'Multiple _ _) = res (LR'Multiple [] a) b > -- leave cases above to do the appropriate merging > res a@(LR'Shift {}) b@(LR'Reduce {}) = res b a > res a@(LR'Reduce _ p) b@(LR'Shift _ p') > = case (p,p') of > (PrioLowest,PrioLowest) -> LR'MustFail > (_,PrioLowest) -> a > (PrioLowest,_) -> b > (No,_) -> LR'Multiple [a] b -- shift wins > (_,No) -> LR'Multiple [a] b -- shift wins > (Prio c i, Prio _ j) > | i < j -> b > | i > j -> a > | otherwise -> > case c of > LeftAssoc -> a > RightAssoc -> b > None -> LR'MustFail > res a@(LR'Reduce r p) b@(LR'Reduce r' p') > = case (p,p') of > (PrioLowest,PrioLowest) -> > LR'Multiple [a] b -- give to earlier rule? > (_,PrioLowest) -> a > (PrioLowest,_) -> b > (No,_) -> LR'Multiple [a] b -- give to earlier rule? 
> (_,No) -> LR'Multiple [a] b > (Prio _ i, Prio _ j) > | i < j -> b > | j > i -> a > | r < r' -> LR'Multiple [b] a > | otherwise -> LR'Multiple [a] b > res _ _ = error "conflict in resolve" ----------------------------------------------------------------------------- Count the conflicts > countConflicts :: ActionTable -> (Array Int (Int,Int), (Int,Int)) > countConflicts action > = (conflictArray, foldl' (\(a,b) (c,d) -> let ac = a + c; bd = b + d in ac `seq` bd `seq` (ac,bd)) (0,0) conflictList) > > where > > conflictArray = listArray (Array.bounds action) conflictList > conflictList = map countConflictsState (assocs action) > > countConflictsState (_state, actions) > = foldr countMultiples (0,0) (elems actions) > where > countMultiples (LR'Multiple (_:_) (LR'Shift{})) (sr,rr) > = (sr + 1, rr) > countMultiples (LR'Multiple (_:_) (LR'Reduce{})) (sr,rr) > = (sr, rr + 1) > countMultiples (LR'Multiple _ _) _ > = error "bad conflict representation" > countMultiples _ c = c ----------------------------------------------------------------------------- > findRule :: Grammar e -> Int -> Int -> Maybe Name > findRule g rule dot = listToMaybe (drop dot lhs) > where Production _ lhs _ _ = lookupProdNo g rule happy-lib-2.1.7/tabular/src/Happy/Tabular/NameSet.hs0000644000000000000000000000275207346545000020371 0ustar0000000000000000{-# LANGUAGE ScopedTypeVariables #-} module Happy.Tabular.NameSet ( -- * Set type NameSet (..), -- * Construction empty, singleton, fromList, -- * Deletion delete, -- * Query member, null, -- * Combine union, unions, difference, (\\), -- * Folds foldr, -- * Conversion -- ** List toAscList ) where import Prelude hiding (foldr, null) import Data.Coerce import Data.IntSet (IntSet) import qualified Data.IntSet as IntSet import Happy.Grammar newtype NameSet = MkNameSet IntSet deriving (Read, Show, Eq, Ord) -- empty :: NameSet empty = coerce IntSet.empty singleton :: Name -> NameSet singleton = coerce IntSet.singleton fromList :: [Name] -> NameSet fromList = 
coerce IntSet.fromList

--

delete :: Name -> NameSet -> NameSet
delete = coerce IntSet.delete

--

member :: Name -> NameSet -> Bool
member = coerce IntSet.member

null :: NameSet -> Bool
null = coerce IntSet.null

--

union :: NameSet -> NameSet -> NameSet
union = coerce IntSet.union

unions :: [NameSet] -> NameSet
unions = coerce . IntSet.unions . fmap coerce

difference :: NameSet -> NameSet -> NameSet
difference = coerce IntSet.difference

(\\) :: NameSet -> NameSet -> NameSet
(\\) = coerce (IntSet.\\)

--

foldr :: forall b. (Name -> b -> b) -> b -> NameSet -> b
foldr = coerce (IntSet.foldr :: (Int -> b -> b) -> b -> IntSet -> b)

--

toAscList :: NameSet -> [Name]
toAscList = coerce IntSet.toAscList
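`NameSet` lifts the `IntSet` API wholesale through `Data.Coerce`, so every operation is zero-cost. The same pattern in miniature, with `Id`/`IdSet` as illustrative stand-ins for `Name`/`NameSet` (so the sketch is self-contained and does not depend on `Happy.Grammar`):

```haskell
import Data.Coerce (coerce)
import Data.IntSet (IntSet)
import qualified Data.IntSet as IntSet

-- Illustrative stand-ins for Happy's Name/NameSet newtypes.
newtype Id    = MkId Int       deriving (Eq, Ord, Show)
newtype IdSet = MkIdSet IntSet deriving (Eq, Show)

-- Each operation is the IntSet one, coerced to the wrapped types;
-- no wrapping or unwrapping code runs at runtime.
fromList :: [Id] -> IdSet
fromList = coerce IntSet.fromList

member :: Id -> IdSet -> Bool
member = coerce IntSet.member

toAscList :: IdSet -> [Id]
toAscList = coerce IntSet.toAscList

main :: IO ()
main = print (toAscList (fromList [MkId 3, MkId 1, MkId 3]))
```

`coerce` works here because both newtype constructors are in scope, making `[Id] -> IdSet` representationally equal to `[Int] -> IntSet`. The one place this breaks down is higher-order functions like `foldr`, whose polymorphic accumulator needs the explicit type annotation seen in the module above.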