Advent
The goal of this project is to get all 500 currently available stars in the form of one single Idris application, and thoroughly document the results as literate Idris files.
Author's Note
This entire book is a single literate code base; the source code is available at https://git.stranger.systems/Idris/advent.
The solutions contained in this project are intended to be read in sequential order, though they can reasonably be read in any order if you have a good level of familiarity with more advanced functional programming topics.
The solutions will involve progressively more advanced topics as day and year number increase, though I try not to introduce too much within the scope of any one day.
Suggestions and other feedback are highly welcome, please reach out to me via any platform you know me on, or send an email to the ~thatonelutenist/public-inbox mailing list on SourceHut.
While this project is intended to read more like a book, it is still a work in progress, so you can follow its development as a pseudo-blog by subscribing to the RSS feed for the repository in your feed reader: https://git.stranger.systems/Idris/advent.rss.
Index of non-day modules
- Provides data structures for structuring the division of the project into years, days, and parts. Provides the Runner-based command line interface for running a selected day's solution.
- Provides extensions of the functionality of the standard library and external libraries. Extensions to the standard library are in the base of this module.
  - Extends the functionality of the effects included in the eff library.
  - Provides views that enable recursively pattern matching numbers as lists of digits, in both ascending and descending order of significance.
  - Provides wrappers over the standard library IOArray type to make them more ergonomic to use.
- Effectful parser mini-library:
  - Effectful parser API
  - Internal state of a parser
  - Parsers for numerical values in multiple bases
  - JSON parser
Index of years and days
- 2015
  - Warm up problem, breaks in our new runner and not much else interesting.
  - An early hint of effectful parsing.
  - Peculiarities of writing mutually recursive functions in dependently typed languages.
  - Basic FFI to openssl to steal its MD5 function for Idris's use.
  - First introduction to views and dependent pattern matching.[1]
  - Naive approach to handling the first 2D grid problem.
  - Introduces dependent maps and indexed type families.
  - Proper effectful parsers and non-determinism in effect stacks.
  - Naive approach to handling the first graph traversal problem.
  - Introduces our Digits views, dependent pattern matching on integers as lists of digits.
  - Introduces refinement types.
  - New Parser effect stack and DLists.
  - Naive ring buffer and parameters blocks.[2]
  - Introduction to streams, infinite collections of data.
References
1. Idris 2 Manual: Views and the "with" rule
2. https://idris2.readthedocs.io/en/latest/tutorial/modules.html#parameterised-blocks-parameters-blocks
Unified interface for any day from any year
This module provides some basic data types for building an application containing all my solutions to all of the parts of all the days across all of the years.
This provides a defensively built API for constructing the overall Advent
data
structure that prevents registering a day/year twice or registering days/years
out of order.
module Runner
import Control.Eff
import Structures.Dependent.FreshList
import public Util.Eff
%default total
Effectful Parts
The solution to each part of a day is run as an effectful computation, and as
the effect stack is meant to be the same across both parts, only varying in the
type of the error value for the Except
effect, we construct a type level
function to have a single source of truth. The err
type can be any type with a
Show
implementation, but that constraint will be tacked on in the next step.
The Logger
effect is provided for logging, and a Reader
effect is provided
to pass in the input, to make the top level API a little bit cleaner. IO
is
also provided, even though the part solutions themselves shouldn't really be
doing any IO, this will come in handy if a part needs IO
for performance
reasons.
||| The effect type is the same in both parts one and two, modulo potentially
||| different error types, so we calculate it here
public export
PartEff : (err : Type) -> List (Type -> Type)
PartEff err =
[IO, Except err, Logger, ReaderL "input" String]
The Day
Record
The Day
type groups together an effectful part1
computation, an optional
effectful part2
computation, and the day number, with some type wrangling to
get the type system out of our way.
part1
and part2
are allowed independent output and error types, and this
record captures Show
implementations for those output and error types so that
we can display them in Main
, where the Day
is consumed, without having to
actually know what the types are.
It is often useful to pass a bit of context, such as the data structures
resulting from parsing, between part1
and part2
. This is achieved through
the erased ctx
type, which is totally opaque to the runner. The code in Main
takes the value of the ctx
type produced as part of the output of
part1
and passes it in as the input of part2
.
||| Model solving a single day
public export
record Day where
constructor MkDay
day : Nat
{0 out1, out2, err1, err2, ctx : Type}
{auto showOut1 : Show out1}
{auto showOut2 : Show out2}
{auto showErr1 : Show err1}
{auto showErr2 : Show err2}
part1 : Eff (PartEff err1) (out1, ctx)
part2 : Maybe (ctx -> Eff (PartEff err2) out2)
%name Day day, day2, day3
Smarter Constructors
The default MkDay
constructor is slightly cumbersome to use, always requiring
something for the part2
slot, even if there isn't a part 2 yet, and
requiring that part2
be wrapped in a Just
when there is one. We provide a
pair of constructors for the case where there is only a part1
, as well as one
for when there is a part1
and a part2
.
namespace Day
First
The First
constructor only accepts a part1
, it does the work of filling in
part2
with Nothing
and setting all of part2
's type arguments to ()
.
||| Constructor for a day with only part one ready to run
public export
First : Show err => Show out =>
(day : Nat) -> (part1 : Eff (PartEff err) (out, ctx'))
-> Day
First day part1 =
MkDay day part1 Nothing {out2 = ()} {err2 = ()}
Both
The Both
constructor does less heavy lifting, the only thing it needs to do is
wrap part2
in a Just
.
||| Constructor for a day with both parts ready to run
public export
Both : Show o1 => Show o2 => Show e1 => Show e2 =>
(day : Nat) -> (part1 : Eff (PartEff e1) (o1, ctx')) ->
(part2 : ctx' -> Eff (PartEff e2) o2)
-> Day
Both day part1 part2 =
MkDay day part1 (Just part2)
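As a quick illustration of how a day module might use these constructors, here is a hypothetical day that sums the numbers in its input for part 1 and passes the parsed list on to part 2 as its context. This is only a sketch, not part of the actual code base: the day itself is made up, and it assumes the eff library's askAt for reading the labeled "input" value, plus lines, mapMaybe, and parsePositive from the standard library.
||| Hypothetical example day, not part of the actual code base
exampleDay : Day
exampleDay = Both 1 part1 part2
  where
    part1 : Eff (PartEff String) (Nat, List Nat)
    part1 = do
      input <- askAt "input"
      -- Parse one natural number per line, silently dropping bad lines
      let xs = mapMaybe parsePositive . lines $ input
      pure (sum xs, xs)
    part2 : List Nat -> Eff (PartEff String) Nat
    part2 xs = pure $ foldl max 0 xs
Had part 2 not been written yet, the same day could instead be registered with First 1 part1.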
Freshness
We will be using a Fresh List from the structures package to build our API defensively against duplicate days and cosmetically annoying out of order day registration. A Fresh List structurally only allows you to prepend/cons an element onto it when it satisfies some freshness criteria relative to the elements already contained in the list.
We compare the day numbers of the two Day
s using the less-than (<
)
relationship. Since we are operating on the start of the list when this
comparison takes place, this enforces, through type checking, that the resulting
Fresh List of Day
s is sorted in ascending order and that no two Day
s have
the same day number.
||| Freshness criteria for days
|||
||| A day is fresh compared to another if the day number of the former day is
||| strictly less than the day number of the latter day
|||
||| This ensures that the days list is always in incrementing sorted order
||| (since we are consing to the front of the list) with no duplicate days
public export
FreshDay : Day -> Day -> Bool
FreshDay x y = x.day < y.day
The Year
Record
The Year
record collects a number of Day
s into a single Fresh List for the
year, also containing the year number for this collection.
||| Collect all the days in a given year
public export
record Year where
constructor MkYear
year : Nat
days : FreshList FreshDay Day
%name Year year, year2, year3
Freshness
Much like Day
s are stored in a FreshList
in Year
, Year
s will be stored
in a FreshList
in Advent
, so we need to provide a freshness criteria for
Year
as well.
We do so by applying the less-than relationship against the year number of the
two Years
, for the same reasons and with the same results as with FreshDay
.
||| Freshness criteria for years
|||
||| A year is fresh compared to another if the year number of the former year is
||| strictly less than the year number of the latter year
|||
||| This ensures that the years list is always in incrementing sorted order
||| (since we are consing to the front of the list) with no duplicate years
public export
FreshYear : Year -> Year -> Bool
FreshYear x y = x.year < y.year
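To see the freshness conditions in action, a year module can bundle its days into a Year value, with the freshness proofs found by the type checker. A sketch reusing the hypothetical exampleDay from above; registering days out of ascending order, or the same day twice, would fail to type check:
||| Hypothetical example year, not part of the actual code base
exampleYear : Year
exampleYear = MkYear 2015 [exampleDay]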
The Advent
Record
The Advent
record collects a number of Year
s in much the same way that
Year
collects a number of days.
||| Collect all years
public export
record Advent where
constructor MkAdvent
years : FreshList FreshYear Year
%name Advent advent, advent2, advent3
Methods
namespace Advent
locate
This is a utility function that searches the FreshList
of Year
s for the
provided year number, then searches the resulting Year
, if one exists, for the
provided day number.
We don't have to worry about potentially encountering duplicates here because of the freshness restriction.
||| Attempt to locate `Day` entry corresponding to the provided day and year numbers
export
locate : (year, day : Nat) -> Advent -> Maybe Day
locate year day advent = do
year <- find (\x => x.year == year) advent.years
find (\x => x.day == day) year.days
Advent of Code
This module provides a command line interface for running the solution to a
specific day for a specific year, run as advent $YEAR $DAY
.
This runner will automatically locate the appropriate input file and then run
the selected day's solution against it. See Runner for the
definitions of the Day
, Year
, and Advent
data types.
module Main
import System
import System.Path
import System.File
import System.Directory
import System.Console.GetOpt
import Data.String
import Data.Vect
import Control.Eff
import Derive.Prelude
import Structures.Dependent.FreshList
import Runner
import Util
import Util.Eff
import Years.Y2015
%language ElabReflection
%default total
Command Line Interface
Since this is a simple application, I am using GetOpt
from contrib
for
argument parsing. Here we define our options list, a data type to describe our
flags, and the usage message header.
data Flag = UseExample | Verbose
%runElab derive "Flag" [Eq, Show]
options : List (OptDescr Flag)
options =
[
MkOpt ['u'] ["use-example"] (NoArg UseExample) "Use example input instead of main input"
, MkOpt ['v'] ["verbose"] (NoArg Verbose) "Enable logging to stderr"
]
header : String
header = "Usage: advent YEAR DAY [OPTION...]"
Error Type
We will be using one unified error type for the entire Main
module, but we
must go ahead and define it before we go any further with command line argument
parsing, as we are about to do some things that might fail.
data Error : Type where
OptionsError : (errs : List String) -> Error
ArgumentsError : (count : Nat) -> Error
InvalidYear : (year : String) -> Error
InvalidDay : (year : Nat) -> (day : String) -> Error
NoSuchDay : (year : Nat) -> (day : Nat) -> Error
NoCurrentDir : Error
InputRead : (path : String) -> (err : FileError) -> Error
SolveError : Show e => (year, day, part : Nat) -> (err : e) -> Error
%name Error err
A Show
implementation for Error
is provided, hidden in this document for
brevity.
Extract the year and day
After peeling off our options from the arguments using GetOpt
, we will be left
with a list of non-option arguments, which we must then parse into our year
number/day number pair. We perform some pattern matching to make sure the list
is the correct length, and then parse the individual integers, throwing an error
if anything goes wrong.
||| Convert the non-option arguments into a Year/Day pair
argumentsToPair : Has (Except Error) fs =>
List String -> Eff fs (Nat, Nat)
argumentsToPair [] = throw $ ArgumentsError 0
argumentsToPair [x] = throw $ ArgumentsError 1
argumentsToPair [year, day] = do
year <- note (InvalidYear year) $ parsePositive year
day <- note (InvalidDay year day) $ parsePositive day
pure (year, day)
argumentsToPair xs = throw $ ArgumentsError (length xs)
The Advent
Construct our top-level Advent
record from the imported Year
modules. The
argument to the MkAdvent
constructor is a FreshList
, with a freshness
criteria ensuring that the list is in ascending order and does not include any
duplicate Year
s.
advent : Advent
advent = MkAdvent [
y2015
]
The Hard Part
Now we must glue everything together, parse our arguments, handle any supplied
options, locate the relevant year and day, locate the corresponding input file,
read it, and then run the selected Day
against it.
Because we are building our application around effects, the actual core of the
application is the effectful start
computation, that the main
function will
then apply IO
based handlers to.
Our reader and logging writer aren't ready yet; we need to do some setup first, so the top level type signature doesn't include them. We'll add them to the effects list as they become available.
covering
||| Actual main, as an effectful computation
start : Eff [IO, Except Error] ()
start = do
Parsing the arguments
Feed the arguments from System.getArgs
into getOpt
, making sure to drop the
name the binary was invoked with, then check for any errors in the result,
throwing our own error if any are encountered.
-- Read and parse arguments/options
args <- getArgs
let opts = getOpt Permute options (drop 1 args)
when (not . null $ opts.errors) $
throw (OptionsError opts.errors)
Use our argumentsToPair
function to extract the selected year and day numbers
from the positional arguments, implicitly throwing an error if there are any
failures doing so.
(year, day_n) <- argumentsToPair opts.nonOptions
Handling the arguments and finding the input
Handle the verbosity flag, if it is set, hook our logger up to stderr, otherwise
blackhole the logs. Afterwards, use logHandler
to introduce the Logger
into
the effects list.
-- If the verbose flag is set, hook up the logging writer to stderr
let verbosity = count (== Verbose) opts.options
let logHandler : Eff [IO, Except Error, Logger] () -> Eff [IO, Except Error] () =
handleLog (Warn >+ verbosity)
-- Add the logging writer to the effect
logHandler $ do
Find the current directory and append to it to find the input file. If the
--use-example/-u
flag is set, look for the input file at
inputs/examples/$year/$day
, otherwise look for it at inputs/real/$year/$day
.
After we've located it, attempt to read from it, throwing an InputRead
error
if any errors occur, then insert the contents into our Reader
and inject it
into the effects list.
-- Locate and read in the input file
Just cwd <- currentDir | _ => throw NoCurrentDir
let cwd = parse cwd
let use_example = any (== UseExample) opts.options
let base_dir = cwd /> "inputs" /> (if use_example then "examples" else "real");
let input_path = show $ base_dir /> (show year) /> (show day_n)
info "Reading input from \{input_path}"
Right contents <- readFile input_path | Left err => throw $ InputRead input_path err
-- Now add the reader that provides the input to the effect
runReaderAt "input" contents {fs = PartEff Error} $ do
Try and locate the Day
in our Advent
record, throwing an error if we are
unable to locate it
-- Attempt to locate the provided day
Just day <- pure $ locate year day_n advent | _ => throw $ NoSuchDay year day_n
Executing the parts
Run the selected Day
's part 1, wrapping any errors that it returns in a
SolveError
. Collect both the result and the opaque context value, then print
out the result.
-- Run part 1
part_1 <- lift $ runExcept day.part1
(part_1, ctx) <- rethrow . mapLeft (SolveError year day_n 1 @{day.showErr1}) $ part_1
putStrLn "\{show year} day \{show day_n} part 1:"
putStrLn $ unlines . map (" " ++) . lines . show @{day.showOut1} $ part_1
Check and see if the selected Day
has a part 2, if it does, feed the opaque
context value collected from part 1 into it, wrap any errors that it returns in
a SolveError
, then print out the result, then return, closing out the program.
-- Run part 2 if we have it
case day.part2 of
Nothing => pure ()
Just part_2_f => do
part_2 <- lift . runExcept $ part_2_f ctx
part_2 <- rethrow . mapLeft (SolveError year day_n 2 @{day.showErr2}) $ part_2
putStrLn "\{show year} day \{show day_n} part 2:"
putStrLn $ unlines . map (" " ++) . lines . show @{day.showOut2} $ part_2
Helper functions
where
Lower logging into the IO component of the effect
Makes use of Logger
's handleLoggerIO
function to "lower" logging actions
into IO
within the effect.
-- Lowers logging into IO within the effect using the given IO function
handleLog :
Has Logger fs => Has IO (fs - Logger) =>
(max_log : Level) -> Eff fs a -> Eff (fs - Logger) a
handleLog max_log x =
handle
(\msg, f =>
do _ <- send $ handleLoggerIO max_log msg
f $ ignore msg) x
Handle the effectful action as an IO action
Use runEff
to translate the effects in start
's effects list into IO actions,
and return the resulting IO ()
.
The IO
component of the effects list is handled via id
, translating IO
actions into themselves, and we construct a helper function to handle errors as
IO actions by printing the error and crashing the program.
covering
||| Shim main, which simply executes the effectful computation
main : IO ()
main = runEff start [id, handleError]
where
handleError : Except Error a -> IO a
handleError (Err err) = do
printLn err
exitFailure
Utility functions
This module contains functions that extend the functionality of standard data types; other types of utility functionality get their own modules.
module Util
import Data.SortedSet
import Data.String
import Data.List.Lazy
import Data.List1
import Data.Vect
import Data.Fin
%default total
Foldable
General utility functions for foldables
minBy
||| Get the minimum element of a collection using the provided comparison
||| function and seed value
export
minBy : Foldable f => (cmp : a -> a -> Ordering) -> (acc : a) -> f a -> a
minBy cmp acc x =
foldl
(\acc, e =>
case e `cmp` acc of
LT => e
_ => acc)
acc x
||| Get the maximum element of a collection using the provided comparison
||| function and seed value
export
maxBy : Foldable f => (cmp : a -> a -> Ordering) -> (acc : a) -> f a -> a
maxBy cmp acc x =
foldl
(\acc, e =>
case e `cmp` acc of
GT => e
_ => acc)
acc x
Functions
repeatN
Recursively applies f
to seed
N times
export
repeatN : (times : Nat) -> (f : a -> a) -> (seed : a) -> a
repeatN 0 f seed = seed
repeatN (S times') f seed = repeatN times' f (f seed)
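For example, doubling three times from a seed of 1 walks through 1, 2, 4 and produces 8:
repeatN 3 (* 2) 1 == 8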
Either
mapLeft
Applies a function to the contents of a Left
if the value of the given
Either
is actually a Left
, otherwise, leaves Right
s untouched.
export
mapLeft : (f : a -> b) -> Either a c -> Either b c
mapLeft f (Left x) = Left (f x)
mapLeft f (Right x) = Right x
List
contains
Returns True
if the list contains the given value
export
contains : Eq a => a -> List a -> Bool
contains x [] = False
contains x (y :: xs) =
if x == y
then True
else contains x xs
rotations
Provides all the 'rotations' of a list.
Example:
rotations [1, 2, 3] == [[1, 2, 3], [3, 1, 2], [2, 3, 1]]
export
rotations : List a -> List (List a)
rotations xs = rotations' (length xs) xs []
where
rotations' : Nat -> List a -> (acc : List (List a)) -> List (List a)
rotations' 0 xs acc = acc
rotations' (S k) [] acc = acc
rotations' (S k) (x :: xs) acc =
let next = xs ++ [x]
in rotations' k next (next :: acc)
permutations
Lazily generate all of the permutations of a list
export
permutations : List a -> LazyList (List a)
permutations [] = pure []
permutations xs = do
(head, tail) <- select xs
tail <- permutations (assert_smaller xs tail)
pure $ head :: tail
where
consSnd : a -> (a, List a) -> (a, List a)
consSnd x (y, xs) = (y, x :: xs)
select : List a -> LazyList (a, List a)
select [] = []
select (x :: xs) = (x, xs) :: map (consSnd x) (select xs)
Vect
permutations
Lazily generate all the permutations of a Vect
export
permutations : Vect n a -> LazyList (Vect n a)
permutations [] = []
permutations [x] = [[x]]
permutations (x :: xs) = do
(head, tail) <- select (x :: xs)
tail <- permutations tail
pure $ head :: tail
where
consSnd : a -> (a, Vect m a) -> (a, Vect (S m) a)
consSnd x (y, xs) = (y, x :: xs)
select : Vect (S m) a -> LazyList (a, Vect m a)
select [y] = [(y, [])]
select (y :: (z :: ys)) =
(y, z :: ys) :: map (consSnd y) (select (z :: ys))
minBy and maxBy
||| Get the minimum element of a non-empty vector by using the provided
||| comparison function
export
minBy : (f : a -> a -> Ordering) -> Vect (S n) a -> a
minBy f (x :: xs) = Foldable.minBy f x xs
||| Get the maximum element of a non-empty vector by using the provided
||| comparison function
export
maxBy : (f : a -> a -> Ordering) -> Vect (S n) a -> a
maxBy f (x :: xs) = Foldable.maxBy f x xs
Convert a list to a vect as a sigma type
export
listToVect : List a -> (n : Nat ** Vect n a)
listToVect xs = (length xs ** fromList xs)
Fin
||| Decrement a Fin, wrapping on underflow
export
unfinS : {n : _} -> Fin n -> Fin n
unfinS FZ = last
unfinS (FS x) = weaken x
Vectors
Define some operations for pairs of numbers, treating them roughly like vectors
Addition and Subtraction
public export
(>+<) : Num a => (x, y : (a, a)) -> (a, a)
(x_1, x_2) >+< (y_1, y_2) = (x_1 + y_1, x_2 + y_2)
public export
(>-<) : Neg a => (x, y : (a, a)) -> (a, a)
(x_1, x_2) >-< (y_1, y_2) = (x_1 - y_1, x_2 - y_2)
Set
Extend Data.SortedSet
length
Count the number of elements in a sorted set
export
length : SortedSet k -> Nat
length x = foldl (\acc, e => acc + 1) 0 x
String
Extend Data.String
isSubstr
Returns True
if needle
is a substring of haystack
We first check to see if the needle is a prefix of the top level haystack, before calling into a tail recursive helper function that peels one character off of the string at a time, checking if the needle is a prefix at each step.
export
isSubstr : (needle, haystack : String) -> Bool
isSubstr needle haystack =
if needle `isPrefixOf` haystack
then True
else isSubstr' haystack
where
isSubstr' : String -> Bool
isSubstr' str with (asList str)
isSubstr' "" | [] = False
isSubstr' (strCons c str) | (c :: x) =
if needle `isPrefixOf` str
then True
else isSubstr' str | x
Lazy List
Cartesian product
Lazily take the cartesian product of two foldables
export
cartProd : Foldable a => Foldable b => a e -> b f -> LazyList (e, f)
cartProd x y =
let y = foldToLazy y
in foldr (\e, acc => combine e y acc) [] x
where
foldToLazy : Foldable a' => a' e' -> LazyList e'
foldToLazy x = foldr (\e, acc => e :: acc) [] x
combine : e -> LazyList f -> LazyList (e, f) -> LazyList (e, f)
combine x [] rest = rest
combine x (y :: ys) rest = (x, y) :: combine x ys rest
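For example, every element of the first collection is paired with every element of the second, with the first collection varying slowest:
cartProd [1, 2] ['a', 'b'] == [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]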
Concat
Lazily concatenate a LazyList of LazyLists
export
lazyConcat : LazyList (LazyList a) -> LazyList a
lazyConcat [] = []
lazyConcat (x :: xs) = x ++ lazyConcat xs
Group
Lazily group a LazyList
export
lazyGroup : Eq a => LazyList a -> LazyList (List1 a)
lazyGroup [] = []
lazyGroup (x :: xs) = lazyGroup' xs x (x ::: [])
where
lazyGroup' : LazyList a -> (current : a) -> (acc : List1 a)
-> LazyList (List1 a)
lazyGroup' [] current acc = [acc]
lazyGroup' (y :: ys) current acc@(head ::: tail) =
if y == current
then lazyGroup' ys current (head ::: (y :: tail))
else acc :: lazyGroup (y :: ys)
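For example, runs of equal adjacent elements are collected into List1 groups:
lazyGroup [1, 1, 2, 3, 3, 3] == [1 ::: [1], 2 ::: [], 3 ::: [3, 3]]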
length
Calculate the length of a LazyList
export
length : LazyList a -> Nat
length = length' 0
where
length' : Nat -> LazyList a -> Nat
length' k [] = k
length' k (x :: xs) = length' (S k) xs
Effects utilities
Contains utility functions extending the functionality of the eff
package.
module Util.Eff
import Control.Eff
import Data.Subset
import Text.ANSI
import System
import System.File
%default total
Logging
Log Levels
Basic enumeration describing log levels; we define some (hidden) utility functions for working with these.
The Other n
log level is restricted to n
greater than 4, to prevent
ambiguity between custom log levels and predefined log levels.
public export
data Level : Type where
Err : Level
Warn : Level
Info : Level
Debug : Level
Trace : Level
Other : (n : Nat) -> {auto _ : n `GT` 4} -> Level
Convert a Level
into a colorized tag
export
levelToTag : Level -> String
levelToTag Err =
show . bolden . show . colored BrightRed $ "[Error]"
levelToTag Warn =
show . bolden . show . colored BrightYellow $ "[Warning]"
levelToTag Info =
show . bolden . show . colored Green $ "[Info]"
levelToTag Debug =
show . bolden . show . colored Magenta $ "[Debug]"
levelToTag Trace =
show . bolden . show . colored Cyan $ "[Trace]"
levelToTag (Other k) =
show . bolden . show . colored BrightWhite $ "[\{show k}]"
Logger effect
This is a basic data structure that captures a lazy log message (so we don't have to pay any of the costs associated with generating the log message when it is filtered)
public export
data Logger : Type -> Type where
Log : (level : Level) -> (msg : Lazy String) -> Logger ()
We'll also provide some basic accessors, and an ignore
function useful for
writing handlers.
export
(.level) : Logger t -> Level
(.level) (Log level msg) = level
export
(.msg) : Logger t -> Lazy String
(.msg) (Log level msg) = msg
export
ignore : Logger t -> t
ignore (Log level msg) = ()
Handler
Because we know that we will only be using logger
in an IO
context, we
aren't currently going to provide a runLogger
or the like, instead we'll
define a function, suitable for use as a runEff
handler, that filters log
messages and prints them to stderr
over IO
.
In the event a log message is filtered out, its inner message is never inspected, avoiding evaluation.
export
handleLoggerIO :
(max_level : Level) -> Logger t -> IO t
handleLoggerIO max_level x =
if x.level <= max_level
then do
_ <- fPutStrLn stderr "\{levelToTag x.level}: \{x.msg}"
pure . ignore $ x
else pure . ignore $ x
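Since handleLoggerIO translates a Logger action directly into IO, a computation whose only effects are logging and IO can be run with it via runEff, in the same way Main handles its own effect stack. A small sketch (prog here is hypothetical):
-- prog : Eff [Logger, IO] ()
-- runEff prog [handleLoggerIO Info, id] : IO ()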
Provide a family of effectful actions that emit log messages at the different log levels.
export
error : Has Logger fs => Lazy String -> Eff fs ()
error x = send $ Log Err x
export
warn : Has Logger fs => Lazy String -> Eff fs ()
warn x = send $ Log Warn x
export
info : Has Logger fs => Lazy String -> Eff fs ()
info x = send $ Log Info x
export
debug : Has Logger fs => Lazy String -> Eff fs ()
debug x = send $ Log Debug x
export
trace : Has Logger fs => Lazy String -> Eff fs ()
trace x = send $ Log Trace x
export
logAt : Has Logger fs => Level -> Lazy String -> Eff fs ()
logAt level x = send $ Log level x
Choose
Lifts any foldable data structure into the Choose effect, enabling the style of nondeterministic programming from the list monad
export
oneOf : Has (Choose) fs => Foldable f =>
f a -> Eff fs a
oneOf x = foldl oneOf' empty x
where
oneOf' : Eff fs a -> a -> Eff fs a
oneOf' acc val = pure val `alt` acc
Lifts any foldable of effectful computations into the Choose effect much the
same as oneOf
export
oneOfM : Has (Choose) fs => Foldable f =>
f (Eff fs a) -> Eff fs a
oneOfM x = foldl alt empty x
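As a sketch of the list-monad style of programming this enables (pairsToTen is hypothetical and not part of the code base), here is a computation that nondeterministically picks two digits summing to ten, pruning the failing branches with empty:
||| Hypothetical example of nondeterminism via Choose
pairsToTen : Has Choose fs => Eff fs (Nat, Nat)
pairsToTen = do
  x <- oneOf [1 .. 9]
  y <- oneOf [1 .. 9]
  when (x + y /= 10) empty
  pure (x, y)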
Subset
Reorder the first two elements of the first argument of a subset
public export
subsetReorderLeft : Subset (x :: g :: xs) ys -> Subset (g :: x :: xs) ys
subsetReorderLeft (e :: (e' :: subset)) = e' :: e :: subset
Add the same element to both arguments of a subset
public export
subsetConsBoth : {0 xs, ys : List a} -> {0 g : a} -> Subset xs ys
-> Subset (g :: xs) (g :: ys)
subsetConsBoth {xs = []} {ys = ys} [] = [Z]
subsetConsBoth {xs = (x :: xs)} {ys} (e :: subset) =
let rest = S e :: subsetConsBoth subset {g}
in subsetReorderLeft rest
A list is always a subset of itself
public export
subsetReflexive : {xs : List a} -> Subset xs xs
subsetReflexive {xs = []} = []
subsetReflexive {xs = (x :: xs)} =
let rest = subsetReflexive {xs}
in subsetConsBoth rest
If xs
is a subset of ys
, then it is also a subset of ys + g
public export
subsetConsRight : {0 g: _} -> {0 xs, ys: _} -> Subset xs ys -> Subset xs (g :: ys)
subsetConsRight {xs = []} {ys = ys} [] = []
subsetConsRight {xs = (x :: xs)} {ys} (e :: subset) =
S e :: subsetConsRight subset {g}
xs
is always a subset of xs + g
public export
trivialSubset : {0 g : a} -> {xs : List a} -> Subset xs (g :: xs)
trivialSubset = subsetConsRight subsetReflexive
fs - f
is always a subset of fs
public export
minusLemma : {fs : List a} -> {0 x : a} -> {auto prf : Has x fs}
-> Subset (fs - x) fs
minusLemma {fs = (_ :: vs)} {prf = Z} = trivialSubset
minusLemma {fs = (w :: vs)} {prf = (S y)} =
let rest = minusLemma {fs = vs} {prf = y}
in subsetConsBoth rest
Array Wrappers
module Array
import Data.IOArray
import Data.Vect
import Data.List.Lazy
import Data.Fin
import Data.Iterable
import Decidable.Equality
import Control.WellFounded
%default total
%hide Prelude.Types.elem
Provide some wrappers over the standard library IO arrays, using plenty of unsafe operations along the way.
Internal Utility Functions
Unsafely unwrap a Maybe
, making the assumption that it is a Just
.
%unsafe
unwrapUnsafe : Maybe a -> a
unwrapUnsafe x = assert_total $ unwrapUnsafe' x
where
partial
unwrapUnsafe' : Maybe a -> a
unwrapUnsafe' (Just x) = x
Insert an element into an IOArray
at the index described by the given Fin
.
insertPair : IOArray e -> (e, Fin n) -> IO ()
insertPair arr (elem, idx) = do
let idx : Int = cast . finToInteger $ idx
_ <- writeArray arr idx elem
pure ()
Tail safe monadic recursion with lazy lists
Generate all of the fins of a given size, lazily.
allFins : (n : Nat) -> LazyList (Fin n)
allFins 0 = []
allFins (S k) = FZ :: map FS (allFins k)
LazyLists have a size
Sized (LazyList e) where
size [] = 0
size (x :: xs) = S (size xs)
Convert a Vect
to a LazyList
vectLazy : Vect n e -> LazyList e
vectLazy [] = []
vectLazy (x :: xs) = x :: vectLazy xs
LazyLists are iterable
refl : {k : Nat} -> LTE k k
refl = reflexive {x = k}
Iterable (LazyList a) a where
iterM accum done ini seed =
trSized seed ini $
\case Nil => pure . Done . done
h :: t => map (Cont t refl) . accum h
Immutable arrays
Immutable arrays created from vectors that provide constant time indexing and slicing operations.
As an invariant of the type, every "slot" in the inner IOArray
must always
be a Just
; unsafe operations are performed that depend on that.
||| An immutable array of length `n`
export
data IArray : Nat -> Type -> Type where
MkIArray : (size : Nat) -> (offset : Int) -> (inner : IOArray e)
-> IArray size e
%name IArray xs, ys, zs
Basic Interface
Indexing operation
export
index : (idx : Fin n) -> IArray n e -> e
index idx (MkIArray size offset inner) = unsafePerformIO index'
where
index' : IO e
index' = do
-- Convert the index to an Int for use in the IO array
let idx : Int = cast . finToInteger $ idx
map unwrapUnsafe $ readArray inner (idx + offset)
Getting the length of an IArray
, as well as a proof of correctness for the
length function
export
length : IArray n e -> Nat
length (MkIArray n _ inner) = n
export
lengthCorrect : (array : IArray n e) -> length array = n
lengthCorrect (MkIArray n _ inner) = Refl
Basic conversion operations
export
fromVect : {n : Nat} -> Vect n e -> IArray n e
fromVect xs = unsafePerformIO fromVect'
where
fromVect' : IO $ IArray n e
fromVect' = do
let pairs = zip (vectLazy xs) (allFins n)
array <- newArray (cast n)
forM_ (insertPair array) pairs
pure $ MkIArray n 0 array
export
replicate : (n : Nat) -> e -> IArray n e
replicate n x = unsafePerformIO replicate'
where
insertR : IOArray e -> Fin n -> IO ()
insertR arr idx = do
let idx : Int = cast . finToInteger $ idx
_ <- writeArray arr idx x
pure ()
replicate' : IO $ IArray n e
replicate' = do
array <- newArray (cast n)
forM_ (insertR array) (Array.allFins n)
pure $ MkIArray n 0 array
export
toVect : IArray n e -> Vect n e
toVect xs =
let xs' : IArray (length xs) e = rewrite lengthCorrect xs in xs
values : Vect _ e = map (flip index $ xs') (allFins _)
in rewrite sym $ lengthCorrect xs in values
arrayToList : IArray n e -> List e
arrayToList xs =
let xs' : IArray (length xs) e = rewrite lengthCorrect xs in xs
in map (flip index $ xs') (allFins (length xs))
Unsafely create an array from a lazy list of values, for internal use only. Will
cause crashes if the provided LazyList
isn't the right length.
%unsafe
unsafeFromLazy : {n : Nat} -> LazyList e -> IO (IArray n e)
unsafeFromLazy xs = do
array <- newArray (cast n)
let pairs = zip xs (allFins n)
forM_ (insertPair array) pairs
pure $ MkIArray _ 0 array
Typical drop and take operations, implemented in constant time
-- TODO: Minimize whatever compiler bug is requiring us to use this
newOffset : Nat -> Int -> Int
newOffset k i = cast k + i
export
drop : (n : Nat) -> IArray (n + m) e -> IArray m e
drop n xs@(MkIArray _ offset inner) =
believe_me $ MkIArray (length xs `minus` n) (newOffset n offset) inner
export
drop' : (n : Nat) -> IArray l e -> IArray (minus l n) e
drop' n (MkIArray l offset inner) =
if n >= l
then believe_me $ fromVect [] {e}
else MkIArray (l `minus` n) (newOffset n offset) inner
export
take : (n : Nat) -> IArray (n + m) e -> IArray n e
take n (MkIArray _ offset inner) = MkIArray n offset inner
A view for pattern matching on IArray
s as if they were lists
namespace View
public export
data AsList : IArray n e -> Type where
Nil : {0 tail : IArray 0 e} -> AsList tail
(::) : {0 xs : IArray (S n) e} -> {rest : IArray n e}
-> (head : e) -> (tail : Lazy (AsList rest)) -> AsList xs
public export
asList : (xs : IArray n e) -> AsList xs
asList (MkIArray 0 offset inner) = []
asList xs@(MkIArray (S k) offset inner) =
let head = 0 `index` xs
rest = drop 1 xs
in head :: (asList rest)
Convert to a LazyList
using our view
export
toLazy : IArray n e -> LazyList e
toLazy xs with (asList xs)
toLazy xs | [] = []
toLazy xs | (head :: tail) =
head :: toLazy _ | tail
Typical filtering and finding interface
-- Don't know if this one is really worth optimizing
export
filter : (e -> Bool) -> IArray n e -> (p : Nat ** IArray p e)
filter f xs =
let (_ ** ps) = filter f (toVect xs)
in (_ ** fromVect ps)
export
find : (e -> Bool) -> IArray n e -> Maybe e
find f xs with (asList xs)
find f xs | [] = Nothing
find f xs | (head :: tail) =
if f head
then Just head
else find f _ | tail
export
findIndex : (e -> Bool) -> IArray n e -> Maybe (Fin n)
findIndex f xs =
let indexed = zip (toLazy xs) (allFins (length xs))
in findIndex' f indexed
where
findIndex' : (e -> Bool) -> LazyList (e, Fin (length xs)) -> Maybe (Fin n)
findIndex' f [] = Nothing
findIndex' f ((e, idx) :: ys) =
if f e
then rewrite sym $ lengthCorrect xs in Just idx
else findIndex' f ys
export
findIndices : (e -> Bool) -> IArray n e -> List (Fin n)
findIndices f xs@(MkIArray n _ _) =
let pairs = zip (toLazy xs) (allFins n)
in toList . map snd . filter (f . fst) $ pairs
Interface Implementations
Provide a Foldable
implementation by performing indexing within the context of
unsafePerformIO
.
export
Foldable (IArray n) where
foldl f acc_val (MkIArray n offset inner) = unsafePerformIO $ foldl'
where
liftF : Fin n -> acc -> IO acc
liftF idx acc_val = do
let idx : Int = cast . finToInteger $ idx
val <- map unwrapUnsafe $ readArray inner (idx + offset)
pure $ f acc_val val
foldl' : IO acc
foldl' = iterM liftF id acc_val (Array.allFins n)
foldr f acc_val (MkIArray n offset inner) = unsafePerformIO $ foldr'
where
liftF : Fin n -> acc -> IO acc
liftF idx acc_val = do
let idx : Int = cast . finToInteger $ idx
val <- map unwrapUnsafe $ readArray inner (idx + offset)
pure $ f val acc_val
foldr' : IO acc
-- TODO: Make a lazy reverse allFins function so we aren't instantiating
-- the full index list here
foldr' = iterM liftF id acc_val (reverse $ List.allFins n)
toList = arrayToList
Provide implementations of Eq
, Ord
, and DecEq
in terms of our AsList
view. The DecEq
implementation requires use of unsafe believe_me
, as arrays
are primitive types the compiler has no insight into the structure of.
export
Eq e => Eq (IArray n e) where
xs == ys = (toLazy xs) == (toLazy ys)
export
Ord e => Ord (IArray n e) where
compare xs ys = compare (toLazy xs) (toLazy ys)
-- TODO : Optimize IArray DecEq
decEq' : DecEq e => {m : Nat} -> (xs, ys : IArray m e) -> Dec (xs = ys)
decEq' xs ys with (asList xs, asList ys)
decEq' xs ys | ([], []) = believe_me $ the (xs = xs) Refl
decEq' xs ys | ((h_x :: t_x), (h_y :: t_y)) =
case decEq h_x h_y of
Yes prf =>
believe_me $ decEq' _ _ | (t_x, t_y)
No contra => believe_me contra
export
DecEq e => DecEq (IArray n e) where
decEq xs@(MkIArray n _ _) ys = decEq' xs ys
Provide Functor
, Applicative
, Monad
, and Traversable
implementations by
building a new array in an IO
context
map' : (f : a -> b) -> IArray n a -> IO (IArray n b)
map' f xs@(MkIArray n _ _) = do
array <- newArray (cast n)
let pairs = zip (map f . toLazy $ xs) (Array.allFins n)
forM_ (insertPair array) pairs
pure $ MkIArray n 0 array
export
Functor (IArray k) where
map f xs = unsafePerformIO $ map' f xs
apply' : {n : Nat} -> IArray n (a -> b) -> IArray n a -> IO (IArray n b)
apply' fs xs = do
array <- newArray (cast n)
let applied = map (\(f, x) => f x) $ zip (toLazy fs) (toLazy xs)
let pairs = zip applied (allFins n)
forM_ (insertPair array) pairs
pure $ MkIArray n 0 array
-- Applicative requires the length of the array to be available at runtime at
-- the type level for `replicate`
export
{k : Nat} -> Applicative (IArray k) where
pure = replicate _
fs <*> xs = unsafePerformIO $ apply' fs xs
-- Like `Vect`'s `join`, this takes the diagonal elements
export
{k : Nat} -> Monad (IArray k) where
join xs =
let lazys = join' . map toLazy $ toLazy xs
in assert_total $ unsafePerformIO $ unsafeFromLazy lazys
where
covering
join' : LazyList (LazyList a) -> LazyList a
join' [] = []
join' ([] :: y) = []
join' ((x :: _) :: xs) =
x :: join' (map (drop 1) xs)
-- join xs = fromVect . join . map toVect . toVect $ xs
-- TODO: Maybe take another pass at optimizing this?
export
Traversable (IArray k) where
traverse fun xs@(MkIArray k _ _) =
map (unsafePerformIO . unsafeFromLazy . fromList)
. Prelude.traverse fun
. toList
$ xs
Provide a Show
implementation in terms of our AsList
view through the
toLazy
wrapper.
export
Show e => Show (IArray n e) where
show xs =
let elements = map show $ toLazy xs
in case elements of
[] => "[]"
(x :: xs) => "[\{join' xs x}]"
where
join' : LazyList String -> (acc : String) -> String
join' [] acc = acc
join' (x :: []) acc = "\{acc}, \{x}"
join' (x :: (y :: xs)) acc =
join' (y :: xs) "\{acc}, \{x}"
Provide a Zippable
implementation by roundtripping through a Vect
. This
isn't the most efficient, but this provides a workable starter implementation.
export
Zippable (IArray k) where
zipWith f xs@(MkIArray k _ _) ys =
unsafePerformIO . unsafeFromLazy $ zipWith f (toLazy xs) (toLazy ys)
zipWith3 f xs@(MkIArray k _ _) ys zs =
unsafePerformIO . unsafeFromLazy $ zipWith3 f (toLazy xs) (toLazy ys) (toLazy zs)
unzipWith f xs@(MkIArray k _ _) =
let (xs, ys) = unzipWith f (toLazy xs)
in (unsafePerformIO . unsafeFromLazy $ xs,
unsafePerformIO . unsafeFromLazy $ ys)
unzipWith3 f xs@(MkIArray k _ _) =
let (xs, ys, zs) = unzipWith3 f (toLazy xs)
in (unsafePerformIO . unsafeFromLazy $ xs,
unsafePerformIO . unsafeFromLazy $ ys,
unsafePerformIO . unsafeFromLazy $ zs)
Provide Semigroup
and Monoid
implementations, matching those for Vect
, in
terms of our other interfaces.
export
Semigroup a => Semigroup (IArray k a) where
xs <+> ys = zipWith (<+>) xs ys
-- Monoid requires the length argument to be available at runtime for
-- `replicate` to work
export
{k : Nat} -> Monoid a => Monoid (IArray k a) where
neutral = replicate _ neutral
Unit tests
IArray
-- @@test IArray RoundTrip
iRoundTrip : IO Bool
iRoundTrip = do
let test_vect : Vect _ Nat = [1, 2, 3, 4, 5]
putStrLn "test_vect: \{show test_vect}"
let xs = fromVect test_vect
putStrLn "xs: \{show xs}"
let round_tripped = toVect xs
putStrLn "round_tripped: \{show round_tripped}"
pure $ test_vect == round_tripped
-- @@test IArray drop/take
iDropTake : IO Bool
iDropTake = do
let test = fromVect [1, 2, 3, 4, 5]
putStrLn "test: \{show test}"
let dropped = drop 2 test
putStrLn "dropped: \{show dropped}"
let taked = take 3 test
putStrLn "taked: \{show taked}"
pure $ (toVect dropped) == [3, 4, 5] && (toVect taked) == [1, 2, 3]
-- @@test IArray toLazy
iLazy : IO Bool
iLazy = do
let test_vect : Vect _ Nat = [1, 2, 3, 4, 5]
let test_array = fromVect test_vect
putStrLn "test_array: \{show test_array}"
let output_lazy = toLazy test_array
putStrLn "output_lazy: \{show output_lazy}"
pure $ vectLazy test_vect == output_lazy
-- @@test IArray foldl/foldr
iFold : IO Bool
iFold = do
let test = fromVect [1, 2, 3, 4, 5]
putStrLn "test: \{show test}"
let foldl_test = foldl (flip (::)) [] test
putStrLn "foldl_test: \{show foldl_test}"
let foldr_test = foldr (::) [] test
putStrLn "foldr_test: \{show foldr_test}"
pure $ foldl_test == [5, 4, 3, 2, 1] && foldr_test == [1, 2, 3, 4, 5]
-- @@test IArray functor
iFunctor : IO Bool
iFunctor = do
let test = fromVect [1, 2, 3, 4, 5]
putStrLn "test: \{show test}"
let output = map (+ 1) test
putStrLn "output: \{show output}"
pure $ toVect output == [2, 3, 4, 5, 6]
-- @@test IArray show
iShow : IO Bool
iShow = do
let test = fromVect [1, 2, 3, 4, 5]
putStrLn "test: \{show test}"
let empty : IArray _ Nat = fromVect []
putStrLn "empty: \{show empty}"
let singleton = fromVect [1]
putStrLn "singleton: \{show singleton}"
pure $
show test == "[1, 2, 3, 4, 5]"
&& show empty == "[]"
&& show singleton == "[1]"
-- @@test IArray monad
iMonad : IO Bool
iMonad = do
let test_vect : Vect 3 (Vect 3 Nat) = [[1, 2, 3], [4, 5, 6], [7, 8 ,9]]
let test_array = fromVect $ map fromVect test_vect
let vect_out = join test_vect
let array_out = join test_array
putStrLn "vect: \{show vect_out} array: \{show array_out}"
pure $ toVect array_out == vect_out
Parsing Utilities
module Parser
import public Parser.Interface as Parser
import public Parser.ParserState as Parser
The interface of a Parser
module Parser.Interface
import public Data.List1
import public Parser.ParserState
import public Control.Eff
export infixr 4 >|
export infixr 5 >&
Parser Errors
Combine the parser state at time of error with an error message.
public export
data ParseError : Type where
-- TODO: Rename this constructor
MkParseError : (state : ParserInternal Id) -> (message : String) -> ParseError
BeforeParse : (message : String) -> ParseError
NestedErrors : (state : ParserInternal Id) -> (message : String)
-> (rest : List ParseError) -> ParseError
Type Alias
public export
Parser : Type -> Type
Parser a = Eff [ParserState, Except ParseError] a
Error Generation
Provide a few effectful actions to generate an error from an error message, and either return it or throw it.
export
parseError : Has ParserState fs => (message : String) -> Eff fs ParseError
parseError message = do
state <- save
pure $ MkParseError state message
export
throwParseError : Has ParserState fs => Has (Except ParseError) fs =>
(message : String) -> Eff fs a
throwParseError message = do
err <- parseError message
throw err
export
guardMaybe : Has ParserState fs => Has (Except ParseError) fs =>
(message : String) -> Eff fs (Maybe a) -> Eff fs a
guardMaybe message x = do
Just x <- x
| _ => throwParseError message
pure x
export
replaceError : (message : String) -> Parser (a -> Parser b)
replaceError message = do
state <- save
pure (\_ => throw $ MkParseError state message)
Running a parser
We will use the phrasing "rundown" to refer to running all the effects in the
parser effect stack except ParserState
, which is left in the effect stack to
facilitate handling in the context of another monad or effect stack, since it
benefits from mutability.
Rundown a parser, accepting the first returning parse, which may be failing or
succeeding, and automatically generating a "no valid parses" message in the event
no paths in the Choice
effect produce a returning parse.
export
rundownFirst : (f : Parser a) -> Eff [ParserState] (Either ParseError a)
rundownFirst f =
runExcept f
Provide wrappers for rundownFirst
for evaluating it in various contexts.
export
runFirstIO : HasIO io => MonadRec io =>
(f : Parser a) -> String -> io (Either ParseError a)
runFirstIO f str = do
Just state <- newInternalIO str
| _ => pure . Left $ BeforeParse "Empty input"
runEff (rundownFirst f) [handleParserStateIO state]
export
runFirstIODebug : (f : Parser a) -> String -> IO (Either ParseError a)
runFirstIODebug f str = do
Just state <- newInternalIO str
| _ => pure . Left $ BeforeParse "Empty input"
runEff (rundownFirst f) [handleParserStateIODebug state]
export
runFirst : (f : Parser a) -> String -> Eff fs (Either ParseError a)
runFirst f str = do
Just state <- pure $ newInternal str
| _ => pure . Left $ BeforeParse "Empty input"
(result, _) <- lift . runParserState state . rundownFirst $ f
pure result
export
runFirst' : (f : Parser a) -> String -> Either ParseError a
runFirst' f str = extract $ runFirst f str {fs = []}
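As a quick usage example (using the exactString combinator defined further down), a parser can be evaluated purely against an input string:
-- runFirst' (exactString "mul") "mul(2,3)" evaluates to Right "mul"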
Utility functionality
Parser combinators
Try to run a computation in the context of the Parser
effect stack; if it
fails (via Except
), reset the state and resort to the supplied callback.
Also supply a version specialized to ignore the error value, returning Just a
if the parse succeeds, and Nothing
if it fails.
export
try : (f : Parser a) -> (err : ParseError -> Parser a) -> Parser a
try f err = do
starting_state <- save
result <- lift . runExcept $ f
case result of
Left error => do
load starting_state
err error
Right result => pure result
export
tryMaybe : (f : Parser a) -> Parser (Maybe a)
tryMaybe f = try (map Just f) (\_ => pure Nothing)
export
tryEither : (f : Parser a) -> Parser (Either ParseError a)
tryEither f = try (map Right f) (pure . Left)
Attempt to parse one of the given input parsers, in the provided order, invoking the provided error action on failure.
The state will not be modified when an input parser fails
export
oneOfE : (err : String) -> List (Parser a) -> Parser a
oneOfE err xs = do
start <- save
oneOfE' err start [] xs
where
oneOfE' : (err : String) -> (start : ParserInternal Id)
-> (errs : List ParseError) -> List (Parser a) -> Parser a
oneOfE' err start errs [] = do
throw $ NestedErrors start err (reverse errs)
oneOfE' err start errs (x :: xs) = do
x <- tryEither x
case x of
Right val => pure val
Left error => oneOfE' err start (error :: errs) xs
Attempt to parse 0+ of an item
export
many : (f : Parser a) -> Parser (List a)
many f = do
Just next <- tryMaybe f
| _ => pure []
map (next ::) $ many f
Attempt to parse 1+ of an item, invoking the supplied error action on failure
export
atLeastOne : (err : ParseError -> Parser (List1 a)) -> (f : Parser a)
-> Parser (List1 a)
atLeastOne err f = do
Right next <- tryEither f
| Left e => err e
map (next :::) $ many f
Lift a parser producing a List
or List1
of Char
into a parser producing a
String
-- TODO: Rename these
export
liftString : Parser (List Char) -> Parser String
liftString x = do
xs <- x
pure $ pack xs
export
liftString' : Parser (List1 Char) -> Parser String
liftString' x = liftString $ map forget x
Attempt to parse a specified character
export
charExact : Char -> Parser Char
charExact c = do
result <- charExact' c
case result of
GotChar char => pure char
GotError err => throwParseError "Got \{show err} Expected \{show c}"
EndOfInput => throwParseError "End of input"
Attempt to parse one of a list of chars
export
theseChars : List Char -> Parser Char
theseChars cs = do
pnote "Parsing one of: \{show cs}"
result <- charPredicate (\x => any (== x) cs)
case result of
GotChar char => pure char
GotError err => throwParseError "Got \{show err} Expected one of \{show cs}"
EndOfInput => throwParseError "End of input"
Attempt to parse an exact string
export
exactString : String -> Parser String
exactString str with (asList str)
exactString "" | [] = do
pnote "Parsing the empty string"
pure ""
exactString input@(strCons c str) | (c :: x) = do
pnote "Parsing exact string \{show input}"
GotChar next <- charPredicate (== c)
| GotError err => throwParseError "Got \{show err} expected \{show c}"
| EndOfInput => throwParseError "End of input"
rest <- exactString str | x
pure input
Wrap a parser in delimiter characters, discarding the value of the delimiters
export
delimited : (before, after : Char) -> Parser a -> Parser a
delimited before after x = do
pnote "Parsing delimited by \{show before} \{show after}"
starting_state <- save
_ <- charExact before
Right val <- tryEither x
| Left err => do
load starting_state
throw err
Just _ <- tryMaybe $ charExact after
| _ => do
load starting_state
throw $ MkParseError starting_state "Unmatched delimiter \{show before}"
pure val
Consume any number of characters of the provided character class and discard the result. Also a version for doing so on both sides of a provided parser
export
nom : Parser Char -> Parser ()
nom x = do
pnote "Nomming"
_ <- many x
pure ()
export
surround : (around : Parser Char) -> (item : Parser a) -> Parser a
surround around item = do
pnote "Surrounding"
nom around
val <- item
nom around
pure val
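These combinators compose into larger parsers fairly directly. As a sketch (bracketedWord is hypothetical and not part of the code base), here is a parser for a lowercase word wrapped in square brackets, with any surrounding spaces discarded:
||| Hypothetical example combinator built from the pieces above
bracketedWord : Parser String
bracketedWord =
  surround (charExact ' ') $
    delimited '[' ']' $
      liftString $ many (theseChars ['a' .. 'z'])
Evaluating it with runFirst' bracketedWord "  [hello]  " should produce Right "hello".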
Composition of boolean functions
||| Return true if both of the predicates evaluate to true
public export
(>&) : (a : e -> Bool) -> (b : e -> Bool) -> (e -> Bool)
(>&) a b x = a x && b x
||| Return true if either of the predicates evaluates to true
public export
(>|) : (a : e -> Bool) -> (b : e -> Bool) -> (e -> Bool)
(>|) a b x = a x || b x
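For example, these compose ordinary Char predicates for use with charPredicate; a predicate accepting a digit or a minus sign might look like this (signedDigit is illustrative, not part of the code base):
signedDigit : Char -> Bool
signedDigit = isDigit >| (== '-')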
Parser State
An effectful description of the text a parser consumes
module Parser.ParserState
import public Data.String
import public Data.DPair
import public Data.Refined
import public Data.Refined.Int64
import public Data.SortedMap
import public Data.IORef
import Data.Primitives.Interpolation
import System.File
import public Control.Eff
Barbie Basics
Barbies are types that can "change their clothes"; in Idris, this manifests as a type indexed by a type-level function that affects the types of the fields.
Since we know our usage here is going to be quite simple, and not even really
making use of dependently typed fun, we are going to implement all the barbie
functionality we need by hand, but if you feel like barbies might be a good fit
for your problem, or you simply want to learn more, please check out a library
like barbies.[1]
public export
Id : Type -> Type
Id x = x
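As a toy illustration of the idea (this record is purely illustrative and not used by the parser), a record indexed by such a type-level function can swap all of its fields between plain values and, say, mutable references:
-- Illustrative barbie: Cursor Id stores plain values,
-- while Cursor IORef would store an IORef Nat and an IORef Bool instead
record Cursor (f : Type -> Type) where
  constructor MkCursor
  position : f Nat
  done : f Bool

plainCursor : Cursor Id
plainCursor = MkCursor 0 False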
Internal State of a Parser
Type alias for our refined Int64
s
public export
0 IsIndex : (length : Int64) -> Int64 -> Type
IsIndex length = From 0 && LessThan length
public export
record Index (length : Int64) where
constructor MkIndex
index : Int64
{auto 0 prf : IsIndex length index}
Stores the location we are currently at in the string, and metadata about it for providing good error messages. Parsing an empty input isn't very interesting, so we exclude inputs of length zero, since that will make other things easier.
||| State representing a parser's position in the text
public export
record ParserInternal (f : Type -> Type) where
constructor MkInternal
-- IDEA: Maybe go full barbie and have this be a field, so that we can, say,
-- read directly from a file instead of from an already loaded string using the
-- same parser
||| The input string
input : String
||| The length of the input string
length : Int64
{auto 0 len_prf : length = cast (strLength input)}
||| A sorted set containing the positions of the start of each line
line_starts : SortedMap (Index length) Nat
||| The position of the next character to read in the input
position : f (Index length)
||| True if we have hit the end of input
end_of_input : f Bool
%name ParserInternal pi, pj, pk
ParserInternal Methods
Construct a ParserInternal
from an input string. Will fail if the input is
empty, because then we can't index it.
export
newInternal : (input : String) -> Maybe (ParserInternal Id)
newInternal input =
-- Check if we have at least one character in the input
case refine0 0 {p = IsIndex (cast (strLength input))} of
Nothing => Nothing
Just (Element position _) => Just $
MkInternal input
(cast (strLength input))
(mkStarts' input (MkIndex position))
(MkIndex position)
False
where
partial
mkStarts :
(str : String) -> (acc : List (Index (cast (strLength str)), Nat))
-> (idx : Index (cast (strLength str))) -> (count : Nat) -> (next : Bool)
-> List (Index (cast (strLength str)), Nat)
mkStarts str acc idx count True =
mkStarts str ((idx, count) :: acc) idx (S count) False
mkStarts str acc idx count False =
case refine0 (idx.index + 1) {p = IsIndex (cast (strLength str))} of
Nothing => acc
Just (Element next _) =>
if strIndex str (cast idx.index) == '\n'
then mkStarts str acc (MkIndex next) count True
else mkStarts str acc (MkIndex next) count False
mkStarts' : (str : String) -> (start : Index (cast (strLength str)))
-> SortedMap (Index (cast (strLength str))) Nat
mkStarts' str start =
let
pairs = assert_total $
mkStarts str [] start 0 True
in fromList pairs
Get the current line and column number
||| Returns the current position of the parser cursor in, zero indexed, (line,
||| column) form
export
positionPair : (pi : ParserInternal Id) -> (Nat, Nat)
positionPair pi =
case lookup pi.position pi.line_starts of
Just line => (line, 0)
Nothing =>
case lookupBetween pi.position pi.line_starts of
-- There will always be at least one line start, and we would have hit
-- the previous case if we were at the start of the first one, so if
-- there isn't a before, we can return a nonsense value safely
(Nothing, _) => (0, 0)
(Just (start, linum), after) =>
-- Our index will always be after the start of the line, for previously
-- mentioned reasons, so this cast is safe
let col = cast {to = Nat} $ pi.position.index - start.index
in (linum, col)
More Barbie Functionality
Provide the barbie analogs of map and traverse for our ParserInternal type, allowing us to change the type of the values in a ParserInternal by mapping over those values.
export
bmap : ({0 a : Type} -> f a -> g a) -> ParserInternal f -> ParserInternal g
-- bmap f = bmap_ (\_ => f)
bmap fun (MkInternal input length line_starts position end_of_input) =
let position' = fun position
end_of_input' = fun end_of_input
in MkInternal input length line_starts position' end_of_input'
export
btraverse : Applicative e => ({0 a : Type} -> f a -> e (g a))
-> ParserInternal f -> e (ParserInternal g)
btraverse fun (MkInternal input length line_starts position end_of_input) =
let pures = (MkInternal input length line_starts)
in [| pures (fun position) (fun end_of_input)|]
Three way result
||| Three way result returned from attempting to parse a single char
public export
data ParseCharResult : Type where
GotChar : (char : Char) -> ParseCharResult
GotError : (err : Char) -> ParseCharResult
EndOfInput : ParseCharResult
The Effect Type
export
data ParserState : Type -> Type where
Save : ParserState (ParserInternal Id)
Load : (ParserInternal Id) -> ParserState ()
-- TODO: Maybe add a ParseString that parses a string of characters as a
-- string using efficient slicing?
ParseChar : (predicate : Char -> Bool) -> ParserState ParseCharResult
ParseExactChar : (char : Char) -> ParserState ParseCharResult
ParseEoF : ParserState Bool
Note : Lazy String -> ParserState ()
Actions
||| Return the current state, for potential later reloading
export
save : Has ParserState fs => Eff fs (ParserInternal Id)
save = send Save
||| Reset to the provided state
export
load : Has ParserState fs => ParserInternal Id -> Eff fs ()
load pi = send $ Load pi
||| Attempt to parse a char, checking to see if it complies with the supplied
||| predicate, updates the state if parsing succeeds, does not alter it in an
||| error condition.
export
charPredicate : Has ParserState fs => (predicate : Char -> Bool)
-> Eff fs ParseCharResult
charPredicate predicate = send $ ParseChar predicate
||| Parse a char by exact value
export
charExact' : Has ParserState fs => (chr : Char) -> Eff fs ParseCharResult
charExact' chr = send $ ParseExactChar chr
||| "Parse" the end of input, returning `True` if the parser state is currently
||| at the end of the input
export
parseEoF : Has ParserState fs => Eff fs Bool
parseEoF = send ParseEoF
||| Make a note to be output when running with a debug handler
export
pnote : Has ParserState fs => Lazy String -> Eff fs ()
pnote x = send $ Note x
Handling a ParserState
IO Context
export
handleParserStateIO : HasIO io =>
IORef (ParserInternal IORef) -> ParserState t -> io t
handleParserStateIO pi Save = do
pi <- readIORef pi
btraverse readIORef pi
handleParserStateIO pi (Load pj) = do
pj <- btraverse newIORef pj
writeIORef pi pj
handleParserStateIO pi (ParseChar predicate) = do
pi <- readIORef pi
False <- readIORef pi.end_of_input
| _ => pure EndOfInput
position <- readIORef pi.position
let char = assert_total $ strIndex pi.input (cast position.index)
True <- pure $ predicate char
| _ => pure $ GotError char
-- Our refinement type on the position forces us to check that the length is
-- in bounds after incrementing it; if it's out of bounds, set the end_of_input
-- flag
case refine0 (position.index + 1) {p = IsIndex pi.length} of
Nothing => do
writeIORef pi.end_of_input True
pure $ GotChar char
Just (Element next _) => do
writeIORef pi.position $ MkIndex next
pure $ GotChar char
handleParserStateIO pi (ParseExactChar chr) = do
-- TODO: do this directly?
handleParserStateIO pi (ParseChar (== chr))
handleParserStateIO pi ParseEoF = do
pi <- readIORef pi
readIORef pi.end_of_input
-- We ignore notes in non-debug mode
handleParserStateIO pi (Note _) = pure ()
export
newInternalIO : HasIO io => String -> io $ Maybe (IORef (ParserInternal IORef))
newInternalIO str = do
Just internal <- pure $ newInternal str
| _ => pure Nothing
internal <- btraverse newIORef internal
map Just $ newIORef internal
Wrapper with debugging output
export
handleParserStateIODebug : HasIO io =>
IORef (ParserInternal IORef) -> ParserState t -> io t
handleParserStateIODebug x (Note note) = do
state <- readIORef x
state <- btraverse readIORef state
_ <- fPutStrLn stderr "Note \{note} -> \{show state}"
pure ()
handleParserStateIODebug x y = do
state <- readIORef x
state <- btraverse readIORef state
_ <- fPutStrLn stderr "\{show y} -> \{show state}"
handleParserStateIO x y
State context
unPS : ParserInternal Id -> ParserState a -> (a, ParserInternal Id)
unPS pi Save = (pi, pi)
unPS pi (Load pj) = ((), pi)
unPS pi (ParseChar predicate) =
if pi.end_of_input
then (EndOfInput, pi)
else let
char = assert_total $ strIndex pi.input (cast pi.position.index)
in if predicate char
then case refine0 (pi.position.index + 1) {p = IsIndex pi.length} of
Nothing =>
(GotChar char, {end_of_input := True} pi)
Just (Element next _) =>
(GotChar char, {position := MkIndex next} pi)
else (GotError char, pi)
unPS pi (ParseExactChar chr) = unPS pi (ParseChar (== chr))
unPS pi ParseEoF = (pi.end_of_input, pi)
unPS pi (Note _) = ((), pi)
export
runParserState : Has ParserState fs =>
(s : ParserInternal Id) -> Eff fs t
-> Eff (fs - ParserState) (t, ParserInternal Id)
runParserState s =
handleRelayS s (\x, y => pure (y, x)) $ \s2,ps,f =>
let (a, st) = unPS s2 ps
in f st a
Footnotes
Numerical Parsers
module Parser.Numbers
import public Parser
import Data.Vect
import Control.Eff
Base Abstraction
public export
record Base where
constructor MkBase
base : Nat
digits : Vect base Char
%name Base b
export
hasDigit : Base -> Char -> Bool
hasDigit (MkBase base digits) c = any (== c) digits
export
digitValue : Base -> Char -> Maybe Nat
digitValue (MkBase base digits) c = digitValue' digits 0
where
digitValue' : Vect n Char -> (idx : Nat) -> Maybe Nat
digitValue' [] idx = Nothing
digitValue' (x :: xs) idx =
if x == c
then Just idx
else digitValue' xs (S idx)
public export
base10 : Base
base10 = MkBase 10
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
public export
hex : Base
hex = MkBase 16
['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f']
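As a quick illustration of hasDigit and digitValue over these bases (the expected values in the comments are ours, not part of the module):
baseExamples : List Bool
baseExamples =
  [ hasDigit hex 'c'                  -- True
  , digitValue hex 'c' == Just 12     -- True: 'c' has value 12 in base 16
  , digitValue base10 'c' == Nothing  -- True: 'c' is not a decimal digit
  ]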
Parsers
Nat
export
nat : Base -> Parser Nat
nat b = do
error <- replaceError "Expected digit"
(first ::: rest) <- atLeastOne error parseDigit
pure $ foldl (\acc, e => b.base * acc + e) first rest
where
parseDigit : Parser Nat
parseDigit = do
GotChar char <- charPredicate (hasDigit b)
| GotError e => throwParseError "\{show e} is not a digit"
| EndOfInput => throwParseError "End Of Input"
case digitValue b char of
Nothing =>
throwParseError "Failed to parse as base \{show b.base}: \{show char}"
Just x => pure x
export
natBase10 : Parser Nat
natBase10 = nat base10
Integer
export
integer : Base -> Parser Integer
integer b = do
negative <- map isJust . tryMaybe $ charExact '-'
value <- map natToInteger $ nat b
if negative
then pure $ negate value
else pure $ value
export
integerBase10 : Parser Integer
integerBase10 = integer base10
Double
-- TODO: Replicate `parseDouble` logic and make this base-generic
export
double : Parser Double
double = do
starting_state <- save
integer <- integer
fraction <- tryMaybe fraction
exponent <- tryMaybe exponent
let str = case (fraction, exponent) of
(Nothing, Nothing) =>
integer
(Nothing, (Just exponent)) =>
"\{integer}e\{exponent}"
((Just fraction), Nothing) =>
"\{integer}.\{fraction}"
((Just fraction), (Just exponent)) =>
"\{integer}.\{fraction}e\{exponent}"
Just out <- pure $ parseDouble str
| _ =>
throw $ MkParseError starting_state "Std failed to parse as double: \{str}"
pure out
where
parseDigit : Parser Char
parseDigit = do
GotChar char <- charPredicate (hasDigit base10)
| GotError e => throwParseError "\{show e} is not a digit"
| EndOfInput => throwParseError "End Of Input"
pure char
integer : Parser String
integer = do
sign <- tryMaybe $ charExact '-'
error <- replaceError "Expected digit"
digits <- map forget $ atLeastOne error parseDigit
case sign of
Nothing => pure $ pack digits
Just x => pure $ pack (x :: digits)
fraction : Parser String
fraction = do
decimal <- charExact '.'
error <- replaceError "Expected digit"
digits <- map forget $ atLeastOne error parseDigit
pure $ pack digits
exponent : Parser String
exponent = do
e <- theseChars ['e', 'E']
sign <- theseChars ['+', '-']
error <- replaceError "Expected digit"
digits <- map forget $ atLeastOne error parseDigit
pure . pack $ sign :: digits
Unit tests
Test roundtripping a value through the provided parser
roundtrip : Eq a => Show a => a -> (p : Parser a) -> IO Bool
roundtrip x p = do
let string = show x
putStrLn "Roundtripping \{string}"
Just state <- newInternalIO string
| _ => do
putStrLn "Failed to produce parser for \{string}"
pure False
Right result <- runEff (rundownFirst p) [handleParserStateIO state] {m = IO}
| Left err => do
printLn err
pure False
putStrLn "Input: \{string} Output: \{show result}"
pure $ x == result
Do some roundtrip tests with the nat parser
-- @@test Nat round trip
natRoundTrip : IO Bool
natRoundTrip = pure $
!(roundtrip 0 natBase10)
&& !(roundtrip 1 natBase10)
&& !(roundtrip 100 natBase10)
&& !(roundtrip 1234 natBase10)
&& !(roundtrip 1234567890 natBase10)
&& !(roundtrip 1234567890000 natBase10)
&& !(roundtrip 12345678901234567890 natBase10)
-- @@test Integer round trip
integerRoundTrip : IO Bool
integerRoundTrip = pure $
!(roundtrip 0 integerBase10)
&& !(roundtrip 1 integerBase10)
&& !(roundtrip 100 integerBase10)
&& !(roundtrip 1234 integerBase10)
&& !(roundtrip 1234567890 integerBase10)
&& !(roundtrip 1234567890000 integerBase10)
&& !(roundtrip 12345678901234567890 integerBase10)
&& !(roundtrip (-1) integerBase10)
&& !(roundtrip (-100) integerBase10)
&& !(roundtrip (-1234) integerBase10)
&& !(roundtrip (-1234567890) integerBase10)
&& !(roundtrip (-1234567890000) integerBase10)
&& !(roundtrip (-12345678901234567890) integerBase10)
Compare our parsing of a double to the standard library's
compareDouble : String -> IO Bool
compareDouble string = do
Just state <- newInternalIO string
| _ => do
putStrLn "Failed to produce parser for \{string}"
pure False
Right result <-
runEff (rundownFirst double) [handleParserStateIO state] {m = IO}
| Left err => do
printLn err
pure False
putStrLn "Input: \{string} Output: \{show result}"
Just double' <- pure $ parseDouble string
| _ => do
printLn "Std failed to parse as double: \{string}"
pure False
pure $ result == double'
-- @@test Double Std Comparison
doubleRoundTrip : IO Bool
doubleRoundTrip = pure $
!(compareDouble "0")
&& !(compareDouble "1")
&& !(compareDouble "100")
&& !(compareDouble "1234")
&& !(compareDouble "1234567890")
&& !(compareDouble "1234567890000")
&& !(compareDouble "12345678901234567890")
&& !(compareDouble "-1")
&& !(compareDouble "-100")
&& !(compareDouble "-1234")
&& !(compareDouble "-1234567890")
&& !(compareDouble "-1234567890000")
&& !(compareDouble "-12345678901234567890")
&& !(compareDouble "0.0")
&& !(compareDouble "1.0")
&& !(compareDouble "-1.0")
&& !(compareDouble "-0.0")
&& !(compareDouble "-0.0")
&& !(compareDouble "0.1234")
&& !(compareDouble "0.01234")
&& !(compareDouble "-0.1234")
&& !(compareDouble "-0.01234")
&& !(compareDouble "1.234e+5")
&& !(compareDouble "1.234e-5")
&& !(compareDouble "-1.234e+5")
&& !(compareDouble "-1.234e-5")
JSON Parser
module Parser.JSON
import public Parser
import public Parser.Numbers
import Control.Monad.Eval
import Structures.Dependent.DList
JSON components
Types a JSON value is allowed to have
public export
data JSONType : Type where
TObject : JSONType
TArray : JSONType
TString : JSONType
TNumber : JSONType
TBool : JSONType
TNull : JSONType
%runElab derive "JSONType" [Generic, Meta, Eq, Ord, Show, DecEq]
%name JSONType type, type2, type3
A JSON value indexed by its type
public export
data JSONValue : JSONType -> Type where
VObject : {types : List JSONType}
-> DList JSONType (\t => (String, JSONValue t)) types -> JSONValue TObject
VArray : {types : List JSONType}
-> DList JSONType JSONValue types -> JSONValue TArray
VString : (s : String) -> JSONValue TString
VNumber : (d : Double) -> JSONValue TNumber
VBool : (b : Bool) -> JSONValue TBool
VNull : JSONValue TNull
%name JSONValue value, value2, value3
JSON Functions
foldl
Analog for consuming a JSON structure by values. Ignores the keys in
objects.
export
dFoldL : {t : JSONType}
-> (acc -> (type : JSONType) -> (val : JSONValue type) -> acc)
-> acc -> JSONValue t -> acc
dFoldL f acc' (VObject xs) =
let recur : acc -> (v : JSONType) -> (String, JSONValue v) -> acc
recur acc' v (key, value) = dFoldL f acc' value
in DList.dFoldL recur acc' xs
dFoldL f acc' (VArray xs) =
let recur : acc -> (v : JSONType) -> JSONValue v -> acc
recur acc' v value = dFoldL f acc' value
in DList.dFoldL recur acc' xs
dFoldL f acc (VString s) = f acc _ (VString s)
dFoldL f acc (VNumber d) = f acc _ (VNumber d)
dFoldL f acc (VBool b) = f acc _ (VBool b)
dFoldL f acc VNull = f acc _ VNull
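For example, a small leaf counter can be built on this fold (countLeaves is our own name, a sketch rather than part of the module); since the fold only applies the supplied function at the non-recursive constructors, this counts strings, numbers, booleans, and nulls:
countLeaves : {t : JSONType} -> JSONValue t -> Nat
countLeaves json = dFoldL (\acc, _, _ => S acc) 0 json
-- e.g. countLeaves (VArray [VNumber 1.0, VBool True]) computes to 2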
Look up a property in an object
export
getProperty : (prop : String) -> (object : JSONValue TObject)
-> Maybe (type : JSONType ** JSONValue type)
getProperty prop (VObject xs) =
case dFind (\_, (key, _) => key == prop) xs of
Nothing => Nothing
Just (type ** (_, val)) => Just (type ** val)
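A small usage sketch (sampleLookup is our own name; the object literal reuses the DList list syntax seen in the smoke test at the end of this module):
sampleLookup : Maybe (type : JSONType ** JSONValue type)
sampleLookup =
  getProperty "stars" (VObject [("name", VString "advent"), ("stars", VNumber 500.0)])
  -- expected: Just (TNumber ** VNumber 500.0)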
Return the values from an object.
export
getValues : (object : JSONValue TObject)
-> (types : List JSONType ** DList JSONType JSONValue types)
getValues (VObject xs) =
dMap' (\t, (k, v) => (t ** v)) xs
Recursively apply a filter to a JSON structure.
export
dFilter : (f : (type : JSONType) -> (val : JSONValue type) -> Bool)
-> {t : JSONType} -> JSONValue t -> Maybe (JSONValue t)
dFilter f value = eval $ dFilter' f value
where
dFilter' : (f : (type : JSONType) -> (val : JSONValue type) -> Bool)
-> {t : JSONType} -> JSONValue t -> Eval $ Maybe (JSONValue t)
dFilter' f (VObject xs) = do
True <- pure $ f _ (VObject xs)
| _ => pure Nothing
let xs = toList xs
xs : List (Maybe (x : JSONType ** (String, JSONValue x))) <-
traverse
(\(t ** (k, v)) => do
Just v <- dFilter' f v
| _ => pure Nothing
pure $ Just (t ** (k, v)))
xs
let (_ ** xs) : (t : List JSONType ** DList _ _ t) =
fromList $ catMaybes xs
pure . Just $ VObject xs
dFilter' f (VArray xs) = do
True <- pure $ f _ (VArray xs)
| _ => pure Nothing
let xs = toList xs
xs : List (Maybe (x : JSONType ** JSONValue x)) <-
traverse
(\(t ** v) => do
Just v <- dFilter' f v
| _ => pure Nothing
pure $ Just (t ** v))
xs
let (_ ** xs) : (t : List JSONType ** DList _ _ t) =
fromList $ catMaybes xs
pure . Just $ VArray xs
dFilter' f x =
pure $ case f _ x of
False => Nothing
True => Just x
Parsers
We are going to get mutually recursive here. Instead of using a mutual
block,
we will use the more modern style of declaring all our types ahead of our
definitions.
export
object : Parser (JSONValue TObject)
export
array : Parser (JSONValue TArray)
export
string : Parser (JSONValue TString)
export
number : Parser (JSONValue TNumber)
export
bool : Parser (JSONValue TBool)
export
null : Parser (JSONValue TNull)
Define a whitespace character class based on the JSON specification.
whitespace : Parser Char
whitespace = do
pnote "Whitespace character"
theseChars [' ', '\n', '\r', '\t']
Convenience function
dpairize : {t : JSONType} ->
Parser (JSONValue t) -> Parser (t' : JSONType ** JSONValue t')
dpairize x = do
x <- x
pure (_ ** x)
Top level json value parser
export
value : Parser (t : JSONType ** JSONValue t)
value = do
pnote "JSON Value"
surround whitespace $ oneOfE
"Expected JSON Value"
[
dpairize object
, dpairize array
, dpairize string
, dpairize number
, dpairize bool
, dpairize null
]
Now go through our json value types
object = do
pnote "JSON Object"
oneOfE
"Expected Object"
[emptyObject, occupiedObject]
where
emptyObject : Parser (JSONValue TObject)
emptyObject = do
delimited '{' '}' (nom whitespace)
pure $ VObject Nil
keyValue : Parser (t : JSONType ** (String, JSONValue t))
keyValue = do
VString key <- surround whitespace string
_ <- charExact ':'
(typ ** val) <- value
pure (typ ** (key, val))
restKeyValue : Parser (t : JSONType ** (String, JSONValue t))
restKeyValue = do
_ <- charExact ','
keyValue
pairs : Parser (List1 (t : JSONType ** (String, JSONValue t)))
pairs = do
first <- keyValue
rest <- many restKeyValue
-- TODO: headTail combinator for this
pure $ first ::: rest
occupiedObject : Parser (JSONValue TObject)
occupiedObject = do
val <- delimited '{' '}' pairs
let (types ** xs) = DList.fromList (forget val)
pure $ VObject xs
array = do
pnote "JSON Array"
oneOfE
"Expected Array"
[emptyArray, occupiedArray]
where
emptyArray : Parser (JSONValue TArray)
emptyArray = do
delimited '[' ']' (nom whitespace)
pure $ VArray Nil
restValue : Parser (t : JSONType ** JSONValue t)
restValue = do
_ <- charExact ','
value
values : Parser (List1 (t : JSONType ** JSONValue t))
values = do
first <- value
rest <- many restValue
pure $ first ::: rest
occupiedArray : Parser (JSONValue TArray)
occupiedArray = do
xs <- delimited '[' ']' values
let (types ** xs) = DList.fromList (forget xs)
pure $ VArray xs
string = do
pnote "JSON String"
str <- liftString $ delimited '"' '"' (many stringCharacter)
pure $ VString str
where
-- TODO: Handle control characters properly
stringCharacter : Parser Char
stringCharacter = do
result <- charPredicate (not . (== '"'))
case result of
GotChar char => pure char
GotError err =>
throwParseError "Expected string character, got \{show err}"
EndOfInput => throwParseError "Unexpected end of input"
number = do
pnote "JSON Number"
d <- double
pure $ VNumber d
bool = do
pnote "JSON Bool"
oneOfE
"Expected Bool"
[true, false]
where
true : Parser (JSONValue TBool)
true = do
_ <- exactString "true"
pure $ VBool True
false : Parser (JSONValue TBool)
false = do
_ <- exactString "false"
pure $ VBool False
null = do
pnote "JSON null"
_ <- exactString "null"
pure VNull
Unit tests
Quick smoke test
-- @@test JSON Quick Smoke
quickSmoke : IO Bool
quickSmoke = do
let input = "{\"string\":\"string\",\"number\":5,\"true\":true,\"false\":false,\"null\":null,\"array\":[1,2,3]}"
putStrLn input
Right (type ** parsed) <- runFirstIODebug value input
| Left err => do
printLn err
pure False
putStrLn "Input: \{input}\nOutput: \{show type} -> \{show parsed}"
let reference_object =
VObject [
("string", VString "string")
, ("number", VNumber 5.0)
, ("true", VBool True)
, ("false", VBool False)
, ("null", VNull)
, ("array", VArray [
VNumber 1.0
, VNumber 2.0
, VNumber 3.0
])
]
case type of
TObject => pure $ parsed == reference_object
_ => pure False
module Years.Y2015
import Structures.Dependent.FreshList
import Runner
Days
export
y2015 : Year
y2015 = MkYear 2015 [
Day 1
day1
Day 2
, day2
Day 3
, day3
Day 4
, day4
Day 5
, day5
Day 6
, day6
Day 7
, day7
Day 8
, day8
Day 9
, day9
Day 10
, day10
Day 11
, day11
Day 12
, day12
Day 13
, day13
Day 14
, day14
]
Year 2015 Day 1
Pretty simple, basic warmup problem, nothing really novel is on display here except the effectful part computations.
Solver Functions
This one implements the entirety of the logic for part 1, in a simple tail recursive manner, pattern matching on each character in the input string.
We include a case for a non-paren character for totality's sake, but since we kinda trust the input here we just don't do anything in that branch.
trackFloor : (start : Integer) -> (xs : List Char) -> Integer
trackFloor start [] = start
trackFloor start ('(' :: xs) = trackFloor (start + 1) xs
trackFloor start (')' :: xs) = trackFloor (start - 1) xs
trackFloor start (x :: xs) = trackFloor start xs
This one is slightly more complicated, ultimately very similar to the above, but with two accumulators, one for the position in the input string in addition to the one for the current floor.
findBasement : (position : Nat) -> (currentFloor : Integer) -> (xs : List Char)
-> Nat
findBasement position currentFloor [] = position
findBasement position currentFloor ('(' :: xs) =
findBasement (position + 1) (currentFloor + 1) xs
findBasement position currentFloor (')' :: xs) =
if currentFloor <= 0
then position
else findBasement (position + 1) (currentFloor - 1) xs
findBasement position currentFloor (x :: xs) =
findBasement (position + 1) currentFloor xs
Part Functions
Both of these parts are simple applications of one of our solver functions.
Part 1
Very uneventful, the only thing novel here is pulling the input out of the ReaderL "input" String, which will become very boring very quickly.
part1 : Eff (PartEff String) (Integer, ())
part1 = do
input <- map unpack $ askAt "input"
let output = trackFloor 0 input
pure (output, ())
Part 2
We have to be careful to start the position accumulator at 1, since the problem specifies that the input string is 1-indexed.
part2 : () -> Eff (PartEff String) Nat
part2 x = do
input <- map unpack $ askAt "input"
pure $ findBasement 1 0 input
Year 2015 Day 2
This day provides us our first little taste of effectful parsing
import Data.List
import Data.List1
import Data.String
Box structure
A record to hold the parameters of a box and methods to operate on it
record Box where
constructor MkBox
length, width, height : Integer
Box methods
.area provides the surface area of the box, .slack provides the surface area of the smallest face, .ribbon provides the smallest perimeter of a face, and .bow provides the volume of the box.
Names are as described in the problem
(.area) : Box -> Integer
(.area) (MkBox length width height) =
2 * length * width + 2 * width * height + 2 * length * height
(.slack) : Box -> Integer
(.slack) (MkBox length width height) =
foldl1 min [length * width, width * height, length * height]
(.ribbon) : Box -> Integer
(.ribbon) (MkBox length width height) =
foldl1 min [2 * (length + width), 2 * (length + height), 2 * (width + height)]
(.bow) : Box -> Integer
(.bow) (MkBox length width height) = length * width * height
Provide the total amount of ribbon needed, by adding the ribbon (smallest perimeter) and the bow (volume) values together.
totalRibbon : Box -> Integer
totalRibbon x = x.ribbon + x.bow
Provide the total amount of paper needed by adding the surface area and the slack (smallest side surface area) together.
paperNeeded : Box -> Integer
paperNeeded x = x.area + x.slack
Box parsing
Parse a box from an input string of the form lengthxwidthxheight.
parseBox : Has (Except String) fs =>
String -> Eff fs Box
parseBox str = do
First, we split the string into 3 parts by pattern matching on the result of split, using pure to lift the computation into the effect, and throwing an error if the pattern match fails.
l ::: [w, h] <- pure $ split (== 'x') str
| xs => throw "Box did not have exactly 3 components: \{show xs}"
Then try to parse each of the three integers, throwing an error if parsing fails, then construct and return our box.
length <- note "Failed parsing length: \{show l}" $ parsePositive l
width <- note "Failed parsing width: \{show w}" $ parsePositive w
height <- note "Failed parsing height: \{show h}" $ parsePositive h
pure $ MkBox length width height
Part Functions
Part 1
Split the input into lines, effectfully traverse our box parser over the resulting list of lines, then sum the amounts of paper needed for each box.
We return the list of parsed boxes as the context here to avoid needing to parse again in part 2
part1 : Eff (PartEff String) (Integer, List Box)
part1 = do
input <- map lines $ askAt "input"
boxes <- traverse parseBox input
let output = sum . map paperNeeded $ boxes
pure (output, boxes)
Part 2
Much the same as part 1, except with the amount of ribbon needed instead of the amount of paper, and without the parsing, since we received the already parsed list of boxes from part 1 as our context value.
part2 : (boxes : List Box) -> Eff (PartEff String) Integer
part2 boxes = pure . sum . map totalRibbon $ boxes
Year 2015 Day 3
This day provides a gentle introduction to mutual
blocks and mutually
recursive functions.
import Data.SortedSet
import Data.String
import Util
Parsing and data structures
We'll do parsing a little more properly this time, turning the input into a list of movement commands
data Movement = North | East | South | West
We need an effectful operation to parse a single char into a movement. We'll pattern match on the char, and include a catch-all case that throws an error in the event of an invalid char
parseMovement : Has (Except String) fs => (x : Char) -> Eff fs Movement
parseMovement '^' = pure North
parseMovement '>' = pure East
parseMovement 'v' = pure South
parseMovement '<' = pure West
parseMovement x = throw "Invalid Movement: \{show x}"
We also need to be able to translate a Movement
into a vector of length one
pointing in the given direction in coordinate space. Somewhat arbitrarily, we
chose 'North' to be positive x and 'East' to be positive y.
vector : Movement -> (Integer, Integer)
vector North = (1, 0)
vector East = (0, 1)
vector South = (-1, 0)
vector West = (0, -1)
Solver functions
Visited houses
This is a pretty simple task, we are just applying the movements to our current position, and adding our current position to the set of visited locations, so we'll handle this with a normal tail recursive function.
To keep the api nice, we wont ask for the set or the starting location in the top-level function, and instead have the top level function initialize the set and location before passing control to the inner tail-recursive variant.
Because the starting location gets a present, we'll add our location to the set before performing the movement, so we will need to add our final location to the set in the recursive base case.
visitedLocations : List Movement -> SortedSet (Integer, Integer)
visitedLocations xs = visitor xs empty (0, 0)
where
visitor : (moves : List Movement) -> (set : SortedSet (Integer, Integer))
-> (location : (Integer, Integer)) -> SortedSet (Integer, Integer)
visitor [] set location = insert location set
visitor (x :: xs) set location =
visitor xs (insert location set) (location >+< vector x)
Robo Santa
This one gets a bit more interesting; we'll adopt the same tail recursive approach, but instead use a mutual block and two mutually recursive functions to handle the alternation between santa and robo santa. The visitSanta function will pass control to visitRobo after executing its movement, and vice versa.
We'll want to insert both present deliverers' locations in the recursive base case; this may result in a duplicate location, but that's okay because SortedSet will only hold at most one of each item inserted into it.
In Idris, there is a general requirement that values be defined before their use, a common feature of dependently typed languages. It arises because the type signature of a function/value alone is not always enough to perform type checking: functions can appear as part of types, requiring evaluation of the function, which makes automatic dependency analysis effectively impossible.
Inside a mutual block, elaboration behaves differently, elaborating types first and then values in separate passes. This restricts what you can do a little, but enables mutually recursive functions.
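As a standalone illustration of the shape of a mutual block (not used in this solution), here is the classic even/odd pair:
mutual
  isEven : Nat -> Bool
  isEven Z = True
  isEven (S k) = isOdd k

  isOdd : Nat -> Bool
  isOdd Z = False
  isOdd (S k) = isEven k
With that shape in mind, here is the solver: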
visitedLocations' : List Movement -> SortedSet (Integer, Integer)
visitedLocations' xs = visitSanta xs empty (0, 0) (0, 0)
where
mutual
visitSanta : (moves : List Movement) -> (set : SortedSet (Integer, Integer))
-> (santa, robo : (Integer, Integer)) -> SortedSet (Integer, Integer)
visitSanta [] set santa robo = insert santa . insert robo $ set
visitSanta (x :: xs) set santa robo =
visitRobo xs (insert santa set) (santa >+< vector x) robo
visitRobo : (moves : List Movement) -> (set : SortedSet (Integer, Integer))
-> (santa, robo : (Integer, Integer)) -> SortedSet (Integer, Integer)
visitRobo [] set santa robo = insert santa . insert robo $ set
visitRobo (x :: xs) set santa robo =
visitSanta xs (insert robo set) santa (robo >+< vector x)
Part Functions
Part 1
Similar to the previous day, we get our input, unpack it, and traverse our effectful movement parsing function over it, before feeding that into our solving function.
part1 : Eff (PartEff String) (Nat, List Movement)
part1 = do
input <- map (unpack . trim) $ askAt "input"
movements <- traverse parseMovement input
let locations = visitedLocations movements
pure (length locations, movements)
Part 2
Same as Part 1, but with a different solving function
part2 : (movements : List Movement) -> Eff (PartEff String) Nat
part2 movements = do
let locations = visitedLocations' movements
pure . length $ locations
Year 2015 Day 4
This day introduces us to a little bit of FFI, linking to openssl to use its MD5 functionality.
import Data.String
import Data.Bits
import System.FFI
import Prim.Array
FFI
We will be using openssl's MD5
function to do the hashing, so we will need to
provide a couple of foreign function definitions.
First, we'll provide one for the C standard library's strlen. We could get this value from within Idris, but we'll do it through FFI to flesh out the example a bit more.
The format of the specifier after %foreign is "language:function name,library name". Since strlen is in the C standard library, we use libc as the library name.
Idris will automatically convert primitive types like String
s and Int
s to
their C equivalents at the FFI boundary.
PrimIO
is a special, basic type of IO action based on linear types, we won't
be getting into the specifics right now.
By convention, primitive FFI function names start with prim__
in idris.
%foreign "C:strlen,libc"
prim__strlen : String -> PrimIO Int
Now we need one for the MD5
function from openssl, the openssl objectfile
calls itself libcrypto
, so we'll use that as the library name.
The MD5
function actually returns a pointer to the hash value, but since the
3rd argument is the pointer it expects to write the hash to, we already have
that value and we'll ignore it by having our prim__md5
return unit.
The second argument is the length of the input string in bytes.
%foreign "C:MD5,libcrypto"
prim__md5 : String -> Int -> Ptr Bits8 -> PrimIO ()
Now we glue these parts together in our higher level md5'
function.
We use malloc
from System.FFI
to allocate a buffer for prim__md5
to write
to. This returns an AnyPtr
, when we need a Ptr Bits8
, so we'll need to cast
it, using prim__castPtr
from the prelude.
We then use our prim__strlen
function to get the length of our input string
from within C, using primIO
from the prelude to "lift" the resulting
PrimIO Int
into our io
context.
We then feed the string, the calculated length, and our casted buffer into our
prim__md5
function, then use prim__getArrayBits8
from the
c-ffi library to extract each of the
bytes, passing them into a pure idris helper function to convert into hex
representation.
Since we are working with C FFI and performing a little bit of manual memory
management here, we must remember to free
our pointer, and then we can splice
together our output string and return.
md5' : HasIO io => String -> io String
md5' str = do
buffer <- malloc 16
len <- primIO $ prim__strlen str
raw_md5 <- primIO $ prim__md5 str len (prim__castPtr buffer)
let bytes = map (\x => prim__getArrayBits8 (prim__castPtr buffer) x) [0..15]
free buffer
pure . joinBy "" . map byteToHex $ bytes
where
hexDigit : Bits8 -> Char
hexDigit n =
if n < 10
then chr (cast n + ord '0')
else chr (cast (n - 10) + ord 'a')
byteToHex : Bits8 -> String
byteToHex n =
pack [hexDigit (n `shiftR` 4), hexDigit (n .&. 0xF)]
Now, because foreign functions return a PrimIO
by default, our md5'
still
requires some sort of HasIO
(like IO). This is undesirable here, after all,
MD5 is a hash function, so it really ought to behave like a pure function. We
know from taking a careful look at what FFI functions we are invoking that we
aren't altering any global state.
Idris has an escape hatch for these types of situations, unsafePerformIO
. This
function is, as the name suggests, quite unsafe, so it shouldn't be used if
you don't fully understand the implications, and only sparingly at that, but
doing FFI to C is an intrinsically unsafe operation, so it's morally okay to use
here, as long as we are careful to not violate referential transparency or type
safety. Haskell programmers should note that unsafePerformIO
is much easier
to use correctly in Idris, largely as a result of Idris being strict by default.
Finally, we'll construct our top level md5
function, wrapping md5'
in
unsafePerformIO
md5 : String -> String
md5 str = unsafePerformIO $ md5' str
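As a quick sanity check, the well-known MD5 test vector for "abc" should round-trip through our wrapper (this of course assumes libcrypto is actually found and linked at runtime):
md5Abc : Bool
md5Abc = md5 "abc" == "900150983cd24fb0d6963f7d28e17f72"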
Solver functions
Count the leading zeros in a string
There is a more clever way to do this making use of a "view" strings have for interacting with them as if they were lazy lists, but we've already covered so much here that we'll save that one for another time and just do it the straightforward way: a top level function accepts a string, turns it into a list of chars, and then passes it into a tail recursive helper function that actually counts the zeros.
countZeros : String -> Nat
countZeros str = countZeros' (unpack str) 0
where
countZeros' : (xs : List Char) -> (acc : Nat) -> Nat
countZeros' [] acc = acc
countZeros' ('0' :: xs) acc = countZeros' xs (acc + 1)
countZeros' (x :: xs) acc = acc
Count the number of zeros for a specific input
Concatenate on our number to our secret key, hash it, and count the zeros
checkZeros : (key : String) -> (number : Nat) -> Nat
checkZeros key number =
let hash = md5 (key ++ show number)
in countZeros hash
Find the first input with a specific number of zeros
Search numbers starting at the provided seed and counting up in a tail recursive
helper function. This function is going to be effectively impossible to prove
totality for, so we have removed our normal %default total
annotation for this
file
findFirst : (zeros : Nat) -> (key : String) -> (seed : Nat) -> Nat
findFirst zeros key current =
if checkZeros key current >= zeros
then current
else findFirst zeros key (current + 1)
Part Functions
Part 1
Pass the input into our findFirst
function and use the result as both our
output and context value. Since part2 is searching for a longer run of 0s than
part 1, numbers already checked by part 1 cannot possibly be valid solutions
for part 2, so we can skip them in part 2.
part1 : Eff (PartEff String) (Nat, Nat)
part1 = do
input <- map trim $ askAt "input"
let number = findFirst 5 input 0
pure (number, number)
Part 2
Start the findFirst
search from the point where part 1 left off.
part2 : Nat -> Eff (PartEff String) Nat
part2 seed = do
input <- map trim $ askAt "input"
let number = findFirst 6 input seed
pure number
Year 2015 Day 5
This day provides a nice chance to introduce
views,
specifically String
's
AsList
view, which lets us treat String
s as if they were lazy lists or iterators.
import Data.String
import Util
Nice Strings
Contains at least 3 vowels
First we must define what a vowel is
vowels : List Char
vowels = ['a', 'e', 'i', 'o', 'u']
Now define a function to check if a string contains a minimum number of vowels.
Here we make use of the with
rule and the AsList
view over strings.
The with
rule provides dependently typed pattern matching, allowing us to make
use of the fact that the form of some of the arguments can be determined by the
values of the others.
The AsList
view associates a String
with a lazy list of the characters in
the string by providing incremental proofs that the String
decomposes into the
lazy list. An AsList
of value Nil
/[]
proves that the associated string is
empty, and an AsList
of value c :: xs
proves that the associated string can
be generated by strCons
ing the character c
onto the rest of the string as
captured by the AsList
in xs
.
We use the asList
function, which generates the AsList
view for the provided
string, as the argument to the with
block. Functions like this that produce
views for a value are referred to as "covering functions". In general, views are
dependently typed constructs whose value provides insight into the structure of
another value.
We implement this function by keeping a counter of the number of vowels we still
need to see to accept the string, returning true when that counter equals zero,
or false if we hit the end of the string (when the string argument is empty and
the associated AsList
is Nil
) while the counter is still non-zero.
If the counter is non-zero and the string is non-empty, we check the current c
character to see if it is a vowel, by checking to see if our vowels
list
contains it, then recurse, decrementing the counter if it was a vowel, and
leaving the counter unchanged if it wasn't.
The use of | xs
after the recursive call to containsVowels
tells Idris to
recurse directly to the with
block, allowing it to reuse the AsList
value in
xs
without having to generate the AsList
view again.
containsVowels : (count : Nat) -> String -> Bool
containsVowels count str with (asList str)
containsVowels 0 str | _ = True
containsVowels (S k) "" | [] = False
containsVowels (S k) (strCons c str) | (c :: xs) =
if any (== c) vowels
then containsVowels k str | xs
else containsVowels (S k) str | xs
At least one letter appears twice in a row
For this one, since we need to inspect two letters at a time, we break out a
nested with
block, with the tail of the AsList
as the argument of the block,
to also pattern match on the next character of the string.
Since the AsList
view is lazy, we need to use force
in the inner with block
to force the evaluation of the next component of the view, this is due to a
bug in the compiler.
The logic here is pretty simple, if we hit the empty string or get down to one
character, return False
, if we have two characters available, compare them,
returning True
if they match, and recursing to the outer with block, only
consuming one of the two characters, if they don't match.
doubleLetter : String -> Bool
doubleLetter str with (asList str)
doubleLetter "" | [] = False
doubleLetter (strCons c str) | (c :: x) with (force x)
doubleLetter (strCons c "") | (c :: x) | [] = False
doubleLetter (strCons c (strCons d str)) | (c :: x) | (d :: y) =
if c == d
then True
else doubleLetter (strCons d str) | x
No Invalid Substrings
First we define the substrings that are disallowed
disallowedSubstrs : List String
disallowedSubstrs = ["ab", "cd", "pq", "xy"]
Idris's standard library does not currently define a function to check if a
string is a substring of another, so we instead use our own isSubstr
function,
defined via use of the AsList
view, from our Util
module.
noDisallowed : String -> Bool
noDisallowed str =
not $ any (`isSubstr` str) disallowedSubstrs
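For reference, isSubstr takes the needle first and the haystack second, which is the argument order implied by the usage above; an illustrative expectation:
hasAb : Bool
hasAb = "ab" `isSubstr` "crabapple"
-- expected True, since "ab" occurs inside "crabapple"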
isNice
Combine all three of our above functions together to check if a string is a "nice" string
isNice : String -> Bool
isNice str = containsVowels 3 str && doubleLetter str && noDisallowed str
New niceness
Pair of letters that appears twice
Use a similar approach to doubleLetter
above, but instead of comparing the two
chars for equality, pack them into a string and check if they are a substring of
the rest of the string.
pairTwice : String -> Bool
pairTwice str with (asList str)
pairTwice "" | [] = False
pairTwice (strCons c str) | (c :: x) with (force x)
pairTwice (strCons c "") | (c :: x) | [] = False
pairTwice (strCons c (strCons d str)) | (c :: x) | (d :: y) =
if pack [c, d] `isSubstr` str
then True
else pairTwice (strCons d str) | x
Letter that repeats after one letter
More of the same, but this one gets extra fun because we are matching 3 letters
at a time. This would be a lot nicer if we could get away without nesting the
with
blocks, but due to the aforementioned compiler bug, we need the nesting.
letterRepeatGap : String -> Bool
letterRepeatGap str with (asList str)
letterRepeatGap "" | [] = False
letterRepeatGap (strCons c str) | (c :: x) with (force x)
letterRepeatGap (strCons c "") | (c :: x) | [] = False
letterRepeatGap (strCons c (strCons d str)) | (c :: x) | (d :: y) with (force y)
letterRepeatGap (strCons c (strCons d "")) | (c :: x) | (d :: y) | [] = False
letterRepeatGap (strCons c (strCons d (strCons e str))) | (c :: x) | (d :: y) | (e :: z) =
if c == e
then True
else letterRepeatGap (strCons d (strCons e str)) | x
New isNice
Glue our two new niceness functions together
isNice' : String -> Bool
isNice' str = pairTwice str && letterRepeatGap str
Part Functions
Part 1
Split our input up into a list of its component strings, and then count how many satisfy the "nice" criteria.
part1 : Eff (PartEff String) (Nat, ())
part1 = do
strings <- map (lines . trim) $ askAt "input"
let x = count isNice strings
pure (x, ())
Part 2
Same as above, but with the new niceness criteria
part2 : () -> Eff (PartEff String) Nat
part2 _ = do
strings <- map (lines . trim) $ askAt "input"
let x = count isNice' strings
pure x
Year 2015 Day 6
Introduction to the advent of code classic 2d grid problem.
import Util
import Data.List.Lazy
import Data.List1
import Data.Vect
import Data.Fin
import Data.String
import Data.IORef
Parsing and data structures
Grid structure
Since this is our first 2d grid problem, we'll keep it simple and use a Vect
of Vect
s to store our problem state, we'll revisit a more complicated but more
efficient structure for storing a 2d Grid
in a later problem.
This alias adds 1 to each of its arguments to ensure non-emptiness.
Grid : (rows, cols : Nat) -> Type -> Type
Grid rows cols e = Vect (S rows) (Vect (S cols) e)
We also provide a type alias for the coordinates in this grid, a simple pair of
Fin
s.
Coord : (rows, cols : Nat) -> Type
Coord rows cols = (Fin (S rows), Fin (S cols))
Range extraction helpers
Convert a Vect
to a lazy list
vectToLazy : Vect n e -> LazyList e
vectToLazy [] = []
vectToLazy (x :: xs) = x :: vectToLazy xs
Extract an inclusive range of indexes from a Vect
as a lazy list
extractSegment : (start, end : Fin n) -> Vect n e -> LazyList e
extractSegment start end xs =
let xs = take (1 + finToNat end) . vectToLazy $ xs
in drop (finToNat start) xs
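A small worked example (segmentExample is our own name; the index literals rely on the integer-literal support for Fin from Data.Fin):
segmentExample : LazyList Nat
segmentExample = extractSegment 1 2 [10, 20, 30, 40]
-- take (1 + 2) elements, then drop 1, giving [20, 30]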
Command data structures
The three types of action that can be performed on a light.
data Action = On | Off | Toggle
The range of coordinates that a command affects
record Range (rows, cols : Nat) where
constructor MkRange
top_left, bottom_right : Coord rows cols
Helper function to extract a range of values from our Grid.
First extracts the range of rows this Range
touches, then maps an extractor
for the range of columns it touches across them, before lazily concatenating the
resulting lists.
extractRange : Range rows cols -> Grid rows cols e -> LazyList e
extractRange (MkRange (x_start, y_start) (x_end, y_end)) xs =
let rs = extractSegment x_start x_end xs
cs = map (extractSegment y_start y_end) rs
in foldrLazy (\e, acc => e ++ acc) [] cs
An action and its associated range
record Command (rows, cols : Nat) where
constructor MkCmd
action : Action
range : Range rows cols
Parsing
Pattern match on the action string, throwing an error if the action was invalid
parseAction : Has (Except String) fs =>
(input : String) -> Eff fs Action
parseAction "on" = pure On
parseAction "off" = pure Off
parseAction "toggle" = pure Toggle
parseAction str = throw "Invalid action: \{str}"
Attempt to split the string into two parts on the comma, and then attempt to
parse the parts as Fin
s, throwing an appropriate error if anything goes wrong
parseCoord : Has (Except String) fs =>
{rows, cols : Nat} -> (input : String) -> Eff fs (Coord rows cols)
parseCoord input =
case split (== ',') input of
head ::: [] => throw "Pair only had one component: \{input}"
head ::: [tail] => do
x <- note "Invalid x coordinate: \{head}" $ parsePositive head
y <- note "Invalid y coordinate: \{tail}" $ parsePositive tail
pure (x, y)
head ::: xs => throw "Pair had \{show $ 1 + length xs} components: \{input}"
Parse two pairs together into a range
parseRange : Has (Except String) fs =>
{rows, cols : Nat} -> (pair1, pair2 : String) -> Eff fs (Range rows cols)
parseRange pair1 pair2 = do
top_left <- parseCoord pair1
bottom_right <- parseCoord pair2
pure $ MkRange top_left bottom_right
Split a string into a list of parts, pattern matching those parts to attempt to
extract a Command
.
parseCommand : Has (Except String) fs => Has Logger fs =>
{rows, cols : Nat} -> (input : String) -> Eff fs (Command rows cols)
parseCommand input = do
trace "Parsing command: \{input}"
case split (== ' ') input of
"toggle" ::: [pair1, "through", pair2] => do
range <- parseRange pair1 pair2
let cmd = MkCmd Toggle range
debug "Parsed \{show cmd} from: \{input}"
pure cmd
"turn" ::: [action, pair1, "through", pair2] => do
action <- parseAction action
range <- parseRange pair1 pair2
let cmd = MkCmd action range
debug "Parsed \{show cmd} from: \{input}"
pure cmd
_ => throw "Improper command: \{input}"
Problem structure
Since we are dealing with a million slots here, we'll want some level of true
mutability. The actual structure containing the slots doesn't need to be
modified once it's set up, so we'll make the mutability interior to the slots and
keep a Grid
of IORef
s.
We'll set up a helper function to initialize our grid based on a seed value.
ioGrid : Has IO fs =>
(rows, cols : Nat) -> (seed : e) -> Eff fs $ Grid rows cols (IORef e)
ioGrid rows cols seed =
let grid : Grid rows cols _ = replicate _ (replicate _ (newIORef seed))
in traverse (traverse id) grid
Convert a Grid
of IORef
s into a Grid
of pure values by traversing the
readIORef
operation over our Grid
.
purify : Has IO fs =>
{rows, cols : Nat} -> Grid rows cols (IORef e) -> Eff fs $ Grid rows cols e
purify grid = traverse (traverse readIORef) grid
Solver Functions
Part 1 Variants
namespace Part1
Apply a given command to our Grid
of IORef
s.
Use our extractRange
function to extract all the IORef
s in the grid cells
touched by our Range
and then traverse an appropriate mutating action over
them.
applyCommand : Has IO fs =>
{rows, cols : Nat} -> Grid rows cols (IORef Bool) -> Command rows cols -> Eff fs ()
applyCommand xs (MkCmd action range) =
let cells = extractRange range xs
in case action of
On => Lazy.traverse_ (`writeIORef` True) cells
Off => Lazy.traverse_ (`writeIORef` False) cells
Toggle => Lazy.traverse_ (`modifyIORef` not) cells
Apply a list of commands to our Grid
of IORef
s, doing some debug logging
along the way.
export
applyCommands : Has IO fs => Has Logger fs =>
{rows, cols : Nat} -> Grid rows cols (IORef Bool) -> List (Command rows cols)
-> Eff fs ()
applyCommands grid xs = applyCommands' 0 (length xs) xs
where
applyCommands' : (idx, len : Nat) -> List (Command rows cols) -> Eff fs ()
applyCommands' idx len [] = pure ()
applyCommands' idx len (x :: xs) = do
debug "Part 1 - Applying command \{show idx}/\{show len}: \{show x}"
applyCommand grid x
applyCommands' (S idx) len xs
Part 2 Variants
namespace Part2
Much the same as above, but instead we apply the part 2 rules to a Grid
of
Nat
.
applyCommand : Has IO fs =>
{rows, cols : Nat} -> Grid rows cols (IORef Nat) -> Command rows cols -> Eff fs ()
applyCommand xs (MkCmd action range) =
let cells = extractRange range xs
in case action of
On => Lazy.traverse_ (`modifyIORef` (+ 1)) cells
Off => Lazy.traverse_ (`modifyIORef` (`minus` 1)) cells
Toggle => Lazy.traverse_ (`modifyIORef` (+ 2)) cells
Identical to above, except for using our part 2 applyCommand
. We can use the
same name here because we have the two variants behind namespaces and Idris can
disambiguate via the types.
export
applyCommands : Has IO fs => Has Logger fs =>
{rows, cols : Nat} -> Grid rows cols (IORef Nat) -> List (Command rows cols)
-> Eff fs ()
applyCommands grid xs = applyCommands' 0 (length xs) xs
where
applyCommands' : (idx, len : Nat) -> List (Command rows cols) -> Eff fs ()
applyCommands' idx len [] = pure ()
applyCommands' idx len (x :: xs) = do
debug "Part 2 - Applying command \{show idx}/\{show len}: \{show x}"
applyCommand grid x
applyCommands' (S idx) len xs
Day functions
Part 1
Parse our commands, generate our initial Grid
with all the lights turned off
(represented by False
), apply our commands to it, purify it, and count the
lights that are turned on.
Pass out our list of parsed commands as the context for reuse in part 2.
part1 : Eff (PartEff String) (Nat, List (Command 999 999))
part1 = do
input <- map (lines . trim) $ askAt "input"
commands <- traverse parseCommand input
grid <- ioGrid 999 999 False
applyCommands grid commands
grid <- purify grid
let lights_on = sum . map (count id) $ grid
pure $ (lights_on, commands)
Part 2
This time, use an initial Grid
with all brightness values at 0, apply our list
of preparsed commands using our part 2 applyCommands
function (selected via
the type signature), and then add up the brightnesses.
part2 : List (Command 999 999) -> Eff (PartEff String) Nat
part2 commands = do
grid <- ioGrid 999 999 (the Nat 0)
applyCommands grid commands
grid <- purify grid
let brightness = sum . map sum $ grid
pure brightness
Year 2015 Day 7
This day provides us a gentle introduction to dependent maps.
Per the problem specification, each wire can only be output to by one gate. To encode this property in the type, we'll tag Gate with the output wire in the type, and then store our circuit as a dependent map from wires to Gates. This ensures that the circuit only contains one gate outputting to each wire, and that looking up a wire in the circuit produces exactly the gate that outputs to it, all through type checking.
Ensuring that the input contains only one gate outputting to each wire is done by throwing a runtime error in the parsing function if a second gate outputting to a given wire is found in the input.
import Data.Bits
import Data.Fin
import Data.String
import Data.List1
import Data.SortedMap
import Data.SortedMap.Dependent
import Decidable.Equality
Parsing and data structures
Problem structure
Define type aliases for wires and literals, and specify that an input can be either a literal or a wire.
Wire : Type
Wire = String
Literal : Type
Literal = Bits16
Input : Type
Input = Either Literal Wire
Description of a Gate, tagged in the type with the name of the output wire.
This creates what is referred to as an "indexed type family", in this case a
family of Gate
types indexed by values of type Wire
.
data Gate : Wire -> Type where
Constant : (x : Input) -> {output : Wire} -> Gate output
And : (x, y : Input) -> {output : Wire} -> Gate output
Or : (x, y : Input) -> {output : Wire} -> Gate output
Not : (x : Input) -> {output : Wire} -> Gate output
LShift : (x : Input) -> (y : Index {a = Literal}) -> {output : Wire}
-> Gate output
RShift : (x : Input) -> (y : Index {a = Literal}) -> {output : Wire}
-> Gate output
Parsing functions
Parse a shift value, making sure its bounded properly to index a Bits16
parseShift : Has (Except String) fs => String -> Eff fs (Index {a = Literal})
parseShift str =
note "Invalid shift: \{str}" $ parsePositive str
Make sure that this string consists only of lowercase letters
parseWire : Has (Except String) fs => String -> Eff fs Wire
parseWire str =
if all isLower (unpack str)
then pure str
else throw "Invalid wire \{str}"
Parse either a wire or a literal
parseInput : Has (Except String) fs => String -> Eff fs Input
parseInput str =
case parsePositive str of
Nothing => map Right . parseWire $ str
Just x => pure $ Left x
Split the string into components and pattern match on them to extract a gate
parseGate : Has (Except String) fs => Has Logger fs =>
String -> Eff fs (wire : Wire ** Gate wire)
parseGate str = do
debug "Parsing gate: \{str}"
case split (== ' ') str of
x ::: ["->", output ] => do
x <- parseInput x
output <- parseWire output
pure (output ** Constant x)
"NOT" ::: [x, "->", output] => do
x <- parseInput x
output <- parseWire output
pure (output ** Not x)
x ::: ["AND", y, "->", output] => do
x <- parseInput x
y <- parseInput y
output <- parseWire output
pure (output ** And x y)
x ::: ["OR", y, "->", output] => do
x <- parseInput x
y <- parseInput y
output <- parseWire output
pure (output ** Or x y)
x ::: ["RSHIFT", y, "->", output] => do
x <- parseInput x
y <- parseShift y
output <- parseWire output
pure (output ** RShift x y)
x ::: ["LSHIFT", y, "->", output] => do
x <- parseInput x
y <- parseShift y
output <- parseWire output
pure (output ** LShift x y)
_ => throw "Invalid Gate: \{str}"
Break the input into lines, traverse parseGate
over the lines, and collect the
results into our circuit DMap.
The type of a value in a SortedDMap
depends on the value of the key that
refers to it, in the type constructor SortedDMap k v
, v
is of type
k -> Type
, this is the function the map calls to calculate the type of the
value from the value of the key. This, not so coincidentally, is the type
signature of our Gate
type constructor, which we can slot in here to get a map
from a wire to the gate outputting to that wire, preventing us from accidentally
associating a wire to the wrong gate.
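To make the dependency concrete, a hypothetical lookup helper (lookupA is ours, not part of the solution) gets a result type that mentions the exact wire we asked about:
lookupA : SortedDMap Wire Gate -> Maybe (Gate "a")
lookupA circuit = lookupPrecise "a" circuit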
parseGates : Has (Except String) fs => Has Logger fs =>
String -> Eff fs (SortedDMap Wire Gate)
parseGates input = do
gates : List (wire : Wire ** Gate wire) <-
traverse parseGate . lines . trim $ input
foldlM insertGate empty gates
where
insertGate : SortedDMap Wire Gate -> (wire : Wire ** Gate wire)
-> Eff fs $ SortedDMap Wire Gate
insertGate map (wire ** gate) =
if isJust $ lookup wire map
then throw "Duplicate gate for \{wire}"
else pure $ insert wire gate map
Solver Functions
Recursively solve for the value on a wire, keeping a cache of already solved wires
covering
solveWire : Has (Except String) fs => Has (State (SortedMap Wire Literal)) fs =>
Has (Reader (SortedDMap Wire Gate)) fs =>
(wire : Wire) -> Eff fs Literal
solveWire wire = do
-- Short circuit if we are already in the cache
Nothing <- map (lookup wire) get
| Just cached => pure cached
-- Find the gate that outputs to this wire in our circuit
Just gate <- map (lookupPrecise wire) ask
| _ => throw "No output for wire \{wire}"
-- Recursively solve for the inputs and apply this gate's operation
res : Literal <- case gate of
Constant x => fromInput x
And x y => do
x <- fromInput x
y <- fromInput y
pure $ x .&. y
Or x y => do
x <- fromInput x
y <- fromInput y
pure $ x .|. y
Not x => do
x <- fromInput x
pure $ complement x
LShift x y => do
x <- fromInput x
pure $ x `shiftL` y
RShift x y => do
x <- fromInput x
pure $ x `shiftR` y
-- Add this gate's output to the cache
modify $ insert wire res
pure res
where
fromInput : (i : Input) -> Eff fs Literal
fromInput (Left x) = pure x
fromInput (Right x) = solveWire x
Part Functions
Part 1
Parse the input, then feed it and an initial empty cache into our solveWire
function, passing the produced value as the output and the circuit, represented
as a dependent map from wires to gates, as well as the output signal from wire
'a' as the context for part 2.
covering
part1 : Eff (PartEff String) (Bits16, (SortedDMap Wire Gate, Literal))
part1 = do
circut <- askAt "input" >>= parseGates
(value, _) <-
runState empty . runReader circut $ solveWire "a"
{fs = State (SortedMap Wire Literal)
:: Reader (SortedDMap Wire Gate)
:: PartEff String }
pure (value, (circut, value))
Part 2
Override the value for the 'b' wire with the output from the 'a' wire in part 1, then rerun our calculation to find the new output for the 'a' wire.
covering
part2 : (SortedDMap Wire Gate, Literal) -> Eff (PartEff String) Bits16
part2 (circut, value) = do
let circut = insert "b" (Constant (Left value)) circut
(value, _) <-
runState empty . runReader circut $ solveWire "a"
{fs = State (SortedMap Wire Literal)
:: Reader (SortedDMap Wire Gate)
:: PartEff String }
pure value
Year 2015 Day 8
This day provides a more in depth introduction to writing effectful parsers, making use of state and non-determinism in our parsers.
import Data.String
import Data.Vect
import Data.List.Lazy
import Data.Either
Parsing
A "Parser" is an effectful computation that has access to a list of chars as
state, can throw exceptions of type String
, and has non-determinism through
the Choose
effect, which consumes some of the state to potentially return a
value.
Basic operations
Get the next character out of the parser state, updating the state to consume that character.
nextChar : Has (State (List Char)) fs => Has (Except String) fs => Eff fs Char
nextChar = do
c :: rest <- get
| [] => throw "End of input"
put rest
pure c
Attempt to parse a single character based on the supplied predicate, consuming the character from the state and throwing the provided error if the predicate does not hold over the consumed character.
char : Has (State (List Char)) fs => Has (Except String) fs =>
(pred : Char -> Bool) -> (err : Char -> String) -> Eff fs Char
char pred err = do
c <- nextChar
unless (pred c) (throw $ err c)
pure c
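As a quick illustration of how char gets used (my own sketch, not part of the solution), a parser for a single decimal digit just supplies a predicate and an error formatter:
digit : Has (State (List Char)) fs => Has (Except String) fs => Eff fs Char
digit = char isDigit (\x => "Expected digit, got `\{String.singleton x}`")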
Type alias for a parser
Parser : (res : Type) -> Type
Parser res = Eff [State (List Char), Except String, Choose, Logger] res
Parse 0+ of a thing, speculatively attempting to apply the given parser. In the event the supplied parser fails, catch the error, unwind changes to the state, and return an empty list, otherwise append the parsed element to the head of the returned list and recurse.
The rewinding of the state on failure could be handled implicitly by the
effect stack if Except
was structured a bit differently, but that's a topic
for another day.
many : (f : Parser t) -> Parser (List t)
many f = do
state <- get
Just x <- lift $ catch (\ _ => pure Nothing) (map Just f)
| Nothing => do
put state
pure []
map (x ::) $ many f
"Parse" the end of the input, returning a unit if we are at the end of input, throwing an error otherwise.
endOfInput : Parser ()
endOfInput = do
[] <- get
| xs => throw "Expected end of input, state non empty: `\{pack xs}`"
pure ()
Character Classes
Parse a single "
, throwing an error if the current character is anything else.
Returns a unit since there is only one possible character this parses, and this
avoids us having to discard the character later in our string
parser.
quote : Parser ()
quote = do
_ <- char (== '"') (\x => "Expected `\"`, got `\{String.singleton x}`")
pure ()
Parse any character except \
or "
normal : Parser Char
normal =
char
(\x => not $ any (== x) (the (List _) ['\\', '"']))
(\x => "Expected normal, got `\{String.singleton x}`")
Escaped Characters
Parse the character sequence \", returning just the ". Despite the fact that this parser, like quote above, can only return one possible character, we return the parsed character, as we intend to provide all the escaped character parsers to the oneOfM combinator later.
eQuote : Parser Char
eQuote = do
_ <- char (== '\\') (\x => "Expected `\\`, got `\{String.singleton x}`")
char (== '"') (\x => "Expected `\"`, got `\{String.singleton x}`")
Parse the character sequence \\, returning just the second \.
eSlash : Parser Char
eSlash = do
_ <- char (== '\\') (\x => "Expected `\\`, got `\{String.singleton x}`")
char (== '\\') (\x => "Expected `\\`, got `\{String.singleton x}`")
Convert a lowercase hexadecimal digit to its numerical value.
fromHex : Char -> Int
fromHex c =
if ord c >= 97
then ord c - 87
else ord c - 48
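For instance, relying on the ASCII codes 'a' = 97 and '0' = 48, fromHex '7' is 55 - 48 = 7 and fromHex 'c' is 99 - 87 = 12. A small sanity check of my own (not part of the solution):
fromHexExamples : Bool
fromHexExamples = map fromHex (the (List _) ['7', 'c']) == [7, 12]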
Parse a character sequence \xAB
, where AB
are hexadecimal digits, and decode
the numerical value of AB
, as a hexadecimal number, into its corresponding
character.
eHex : Parser Char
eHex = do
_ <- char (== '\\') (\x => "Expected `\\`, got `\{String.singleton x}`")
_ <- char (== 'x') (\x => "Expected `x`, got `\{String.singleton x}`")
[x, y] <- map (map $ fromHex . toLower) .
sequence .
Vect.replicate 2 $
(char
isHexDigit
(\x => "Expected hex digit, got `\{String.singleton x}`"))
pure . chr $ x * 0x10 + y
Use the oneOfM combinator to combine our escaped character parsers into a single character class. oneOfM uses the Choose effect to try all of the provided parsers, conceptually in parallel, returning all of the results.
escaped : Parser Char
escaped = oneOfM $ the (List _) [eQuote, eSlash, eHex]
Top Level Class
Combine our normal
and escaped
parsers into a single parser for non-quote
characters.
character : Parser Char
character = oneOfM $ the (List _) [normal, escaped]
Parse a string literal by describing its layout as a quote ("
), followed by
any number of characters, then another quote, followed by the end of input.
Return the characters between the outer quotes.
string : Parser (List Char)
string = do
quote
xs <- many character
quote
endOfInput
pure xs
Running a parser
Run a parser, with some debug logging, by peeling the parsing effects off of the type. The order is important here; remember that function composition "runs" right to left.
We peel the state off the type first, so that we can get implicit "rewinding" of the state inside of our combinators, like oneOfM.
For a full understanding of what's going on here, we need to see how the type signature changes as we peel effects off the type. You can uncomment the commented lines, comment out the rest of the function, and modify the let output = line to follow along yourself, though the types you get may look a little different because Idris evaluates type aliases like Eff for you. I am keeping them in aliased forms, and excluding Logger, which doesn't impact the semantics of parsing, to keep the examples concise:
-
runState (unpack str) $ x
This produces a value with type
Eff [Except String, Choose] (List Char, List Char)
. In this tuple of values, the first element is the actual output of the parser, and the second element is the state after the parser has run. -
runExcept . runState (unpack str) $ x
This produces a value with type
Eff [Choose] (Either String (List Char, List Char))
. This corresponds to a computation that either returns our tuple of output and state from before, or an error. Important to note here is that, in the error case, we only return aString
(our error type), our state is discarded. -
runChoose {f = LazyList} . runExcept . runState (unpack str) $ x
The
Choose
effect, works with any type that implements theAlternative
interface, and the choice of type impacts the semantics. A full discussion of this is beyond the scope for today, but we chose to "run"Choose
withLazyList
, which effectively gives us an iterator over all the possible parsings of our input text.This produces a value with type
Eff [] (LazyList (Either String (List Char, List Char)))
. When we hit an application ofChoose
, likeoneOfM
, all possibilities will be tried and each one will be added to our outputLazyList
. Because this is aLazyList
, and not aList
, only values we actually inspect are generated, allowing parsing to terminate after the first successful parse without having to generate the rest of the list.Note that we have an independent possible state value for each slot in the list, this speaks to this effect stack, when run in this order, providing a sort of branching behavior for states, allowing different branches in the
Choose
effect to modify their state without impacting the state of other branches.
From there, we run our lazyRights
helper function over the outputs LazyList
to discard parsing paths that result in an error, extract the first element of
each tuple, get the head of the list, if one still exists, and use pack
to
convert the contents of the resulting Maybe (List Char)
to a string. Then a
little bit of debug logging, and return the output.
runParser : Has Logger fs => Parser (List Char) -> String
-> Eff fs $ Maybe String
runParser x str = do
info "Parsing: \{str}"
-- let outputs =
-- runChoose {f = LazyList} . runExcept . runState (unpack str) $ x
-- ?parser_types
outputs <-
lift . runChoose {f = LazyList} . runExcept . runState (unpack str) $ x
let output = map pack . head' . map fst . lazyRights $ outputs
case output of
Nothing =>
debug "Failed: \{show outputs}"
Just y => do
debug "Got: \{y}"
trace "\{show outputs}"
pure output
where
lazyRights : LazyList (Either a b) -> LazyList b
lazyRights [] = []
lazyRights (Left _ :: xs) = lazyRights xs
lazyRights (Right x :: xs) = x :: lazyRights xs
Escaping characters
This is much more boring than the parsing; we just do simple recursive pattern matching on the characters of the provided string, escaping " and \, and surround the resulting string with quotes.
escape : String -> String
escape str = "\"\{pack . escape' . unpack $ str}\""
where
escape' : List Char -> List Char
escape' [] = []
escape' ('"' :: xs) = '\\' :: '"' :: escape' xs
escape' ('\\' :: xs) = '\\' :: '\\' :: escape' xs
escape' (x :: xs) = x :: escape' xs
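As a concrete example of my own (not part of the solution): escaping the three-character string a"b adds the surrounding quotes and one backslash, yielding a six-character result.
escapeExample : Bool
escapeExample = escape "a\"b" == "\"a\\\"b\""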
Part Functions
Part 1
Convert the input into a list of lines, traverse our parser over it, deal with possible failures by sequencing the List (Maybe String) into a Maybe (List String), and then compute the difference in character counts.
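The sequencing step behaves like this minimal illustration of my own (using the standard library's sequence): a single Nothing collapses the whole result.
sequenceExample : Maybe (List String)
sequenceExample = sequence (the (List _) [Just "a", Nothing])
-- Nothing; with [Just "a", Just "b"] it would be Just ["a", "b"]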
part1 : Eff (PartEff String) (Int, List String)
part1 = do
inputs <- map lines $ askAt "input"
outputs <- traverse (runParser string) inputs
Just outputs <- pure $ sequence outputs
| _ => throw "Some strings failed to parse"
let difference =
sum $ zipWith (\x, y => strLength x - strLength y) inputs outputs
pure (difference, inputs)
Part 2
Map our character escaping function over our input string and compute the difference in character counts.
Make a little stop along the way to ensure that escape -> parse round trips without changing the content of the string.
part2 : List String -> Eff (PartEff String) Int
part2 inputs = do
let outputs = map escape inputs
Just reversed_outputs <- map sequence . traverse (runParser string) $ outputs
| _ => throw "Reversing escaping of the inputs failed"
unless (all id $ zipWith (==) inputs reversed_outputs) $ do
debug . delay . joinBy "\n" . map show . filter (\(x, y, z) => x /= z) $
zip3 inputs outputs reversed_outputs
throw "Parsed outputs were not identical to inputs"
let difference =
sum $ zipWith (\x, y => strLength y - strLength x) inputs outputs
pure difference
Year 2015 Day 9
This day provides our first example of a graph traversal problem. We'll use a Choose based effectful breadth first search to identify all the possible paths meeting the problem criteria, and then figure out the lengths in a separate step. This isn't a particularly efficient solution, but it's good enough for this small problem size.
import Data.String
import Data.List1
import Data.Vect
import Data.List.Lazy
import Data.SortedMap
import Data.SortedSet
import Util
Parsing and data structures
Parsing
Define a location pair as a collection of two locations and a distance between
them (as this graph is undirected), as well as some type aliases, a contains
method to test if a pair covers a given location, and implement Foldable
for
the type to get access to the standard library utility methods.
DistanceMap
serves as a slightly fancy adjacency list for our graph
representation. We'll load this into a Reader
effect to avoid having to pass
it around as an explicit argument to every function later on once it's been
computed.
Location : Type
Location = String
Distance : Type
Distance = Integer
record LocationPair (l : Type) where
constructor MkLP
locations : Vect 2 l
distance : Integer
%name LocationPair lp, mp, np, op
contains : Eq l => l -> LocationPair l -> Bool
contains x lp =
case find (== x) lp.locations of
Nothing => False
Just _ => True
Foldable LocationPair where
foldl f acc lp = foldl f acc lp.locations
foldr f acc lp = foldr f acc lp.locations
toList lp = toList lp.locations
LP : Type
LP = LocationPair Location
DistanceMap : Type
DistanceMap = SortedMap Location (List LP)
Perform simple, pattern matching based parsing of a location pair.
parseLocationPair : Has (Except String) fs =>
String -> Eff fs LP
parseLocationPair str = do
from ::: ["to", to, "=", dist] <- pure $ split (== ' ') str
| _ => throw "Badly formatted input string: \{str}"
dist <- note "Bad distance: \{dist}" $ parsePositive dist
pure $ MkLP [from, to] dist
Convert a list of pairs to a DistanceMap
, a mapping from a location to all the
location pairs that cover it. To keep this function readable, we do this inside
an inner function with the State DistanceMap
effect.
Since the inner function is recursive, instead of extracting the result after
running the state effect, we lift it into an unconstrained fs
so we can borrow
the stack-safety from our runner's top level runEff
.
pairsToMap : List LP -> Eff fs DistanceMap
pairsToMap lps = do
(_, out) <- lift . runState empty $ insertPairs lps
pure out
where
insertLoc : LP -> (loc : Fin 2) -> Eff [State DistanceMap] ()
insertLoc lp loc = do
let loc = (index loc lp.locations)
list <- map (lookup loc) get
case list of
Nothing => modify $ insert loc [lp]
Just xs => modify $ insert loc (lp :: xs)
insertPair : LP -> Eff [State DistanceMap] ()
insertPair lp = traverse_ (insertLoc lp) (the (List _) [0, 1])
insertPairs : List LP -> Eff [State DistanceMap] ()
insertPairs lps = traverse_ insertPair lps
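As a hedged illustration of my own using the puzzle's example pair (not part of the solution), a single entry "London to Dublin = 464" produces a map in which both "London" and "Dublin" point at that same pair:
toyDistanceMap : Eff fs DistanceMap
toyDistanceMap = pairsToMap [MkLP ["London", "Dublin"] 464]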
Solver functions
Generate all the possible paths originating from current
with length depth
,
which is set to the number of locations in the calling function to provide the
"visits each node once" property.
The path
argument accumulates the path taken by this particular branch of the
breadth first search. It is accumulated in reverse order, but this doesn't
really matter as this graph is undirected.
We use the Choose
effect to explore every possible link connecting to the
current node, after filtering the possible links for ones that satisfy the
problem constraints.
paths : Has (Reader DistanceMap) fs => Has Choose fs => Has Logger fs =>
(path : List Location)
-> (depth : Nat)
-> (current : Location)
-> Eff fs (List Location)
paths path 0 current = empty
paths path 1 current =
if contains current path
then empty
else pure $ current :: path
paths path (S depth') current = do
debug "Paths from \{current} at \{show (S depth')} having visited \{show path}"
-- If we are in the path list, we have revisited this node, bail
False <- pure $ contains current path
| _ => empty
-- Update our path list to contain ourself
let path = current :: path
-- Find our next possible steps, bailing out if there are none
Just nexts <- map (lookup current) ask
| _ => do empty
-- Convert it to a list of locations
let nexts = concat . map toList $ nexts
-- Filter out any that are already in the path, this would be covered by the
-- check at the start of this function, but this cleans up the logs
let nexts = filter (not . (flip contains) path) nexts
-- Select our next node
next <- oneOf nexts
paths path depth' next
Calculate all the possible paths that satisfy the problem constraints.
As a minor optimization, we only explore the paths leading out of a single arbitrary location to start with, and then take advantage of a symmetry of the problem, collecting all the "rotations" of each output path to cheaply calculate all the paths that take all the same steps as our "seed" path, but have different starting and ending locations.
allPaths :
Has (Reader DistanceMap) fs => Has Logger fs => Has (Except String) fs =>
Eff fs $ SortedSet (List Location)
allPaths = do
locs <- map keys ask
first <- note "No locations" $ head' locs
paths <- lift . runChoose {f = LazyList} $
paths [] (length locs) first {fs = [Choose, Reader _, Logger]}
pure $ buildSet paths empty
where
insertPaths : List (List Location) -> (acc : SortedSet (List Location))
-> SortedSet (List Location)
insertPaths [] acc = acc
insertPaths (x :: xs) acc = insertPaths xs (insert x acc)
buildSet : LazyList (List Location) -> (acc : SortedSet (List Location))
-> SortedSet (List Location)
buildSet [] acc = acc
buildSet (x :: xs) acc =
let rots = rotations x
in buildSet xs (insertPaths rots acc)
Calculate the length of a path by recursively destructuring the list and looking
up each pair of locations in our DistanceMap
.
pathLen : Has (Reader DistanceMap) fs => Has (Except String) fs =>
(path : List Location) -> Eff fs Integer
pathLen [] = pure 0
pathLen (x :: []) = pure 0
pathLen (x :: (y :: xs)) = do
lps <- map (lookup x) ask >>= note "Not found in distance map: \{x}"
lp <- note "Pair not found \{x} \{y}" $
find (\l => contains x l && contains y l) lps
map (lp.distance +) $ pathLen (y :: xs)
Calculate the shortest path from a list of paths by recursively comparing lengths.
shortestPath : Has (Reader DistanceMap) fs => Has (Except String) fs =>
(paths : List (List Location, Integer)) -> Eff fs (List Location, Integer)
shortestPath paths = do
shortest paths Nothing
where
shortest : List (a, Integer) -> (acc : Maybe (a, Integer))
-> Eff fs (a, Integer)
shortest [] acc = note "No paths" acc
shortest (x :: xs) acc =
case acc of
Nothing => shortest xs (Just x)
Just y =>
if (snd x) < (snd y)
then shortest xs (Just x)
else shortest xs (Just y)
Calculate the longest path from a list of paths by recursively comparing lengths.
longestPath : Has (Reader DistanceMap) fs => Has (Except String) fs =>
(paths : List (List Location, Integer)) -> Eff fs (List Location, Integer)
longestPath paths = do
longest paths Nothing
where
longest : List (a, Integer) -> (acc : Maybe (a, Integer))
-> Eff fs (a, Integer)
longest [] acc = note "No paths" acc
longest (x :: xs) acc =
case acc of
Nothing => longest xs (Just x)
Just y =>
if (snd x) > (snd y)
then longest xs (Just x)
else longest xs (Just y)
Part Functions
Part 1
Parse our locations, turn them into a DistanceMap
, load that into our reader,
generate our paths, then find the one with the shortest length.
part1 :
Eff (PartEff String) (Integer, (DistanceMap, List (List Location, Integer)))
part1 = do
locations <- map lines (askAt "input") >>= traverse parseLocationPair
info "Locations: \{show locations}"
distance_map <- pairsToMap locations
info "Distance map: \{show distance_map}"
runReader distance_map {fs = Reader DistanceMap :: PartEff String} $ do
paths <- map Prelude.toList allPaths
trace "Paths: \{show paths}"
info "Found \{show $ length paths} paths"
distances <- traverse pathLen paths
let pairs = zip paths distances
(path, len) <- shortestPath pairs
info "Shortest Path: \{show path} \{show len}"
pure (len, (distance_map, pairs))
Part 2
Feed the paths and distances from part 1 into our longest path function.
part2 : (DistanceMap, List (List Location, Integer))
-> Eff (PartEff String) Integer
part2 (distance_map, pairs) = do
runReader distance_map {fs = Reader DistanceMap :: PartEff String} $ do
(path, len) <- longestPath pairs
info "Longest Path: \{show path} \{show len}"
pure len
Year 2015 Day 10
This day doesn't really add anything new, but we will show off our new views for viewing integers as lists of digits.
import Data.String
import Data.List1
import Data.List.Lazy
import Data.Monoid.Exponentiation
import Data.Nat.Views
import Util
import Util.Digits
Solver Functions
Produce a lazy list of the digits of a number, in descending order of
significance. This effectively translates our new
Descending
view to a LazyList
.
lazyDigits : Integer -> LazyList Integer
lazyDigits i with (descending i)
lazyDigits i | (NegDec rec) = lazyDigits _ | rec
lazyDigits 0 | Start = []
lazyDigits ((digit * (10 ^ magnitude)) + rest) | (Prev _ digit rest rec) =
digit :: lazyDigits _ | rec
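For example, 2015 decomposes into its digits most significant first. A small check of my own (assuming LazyList's Foldable instance for toList):
digitsExample : Bool
digitsExample = toList (lazyDigits 2015) == [2, 0, 1, 5]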
Apply the look-and-say rule to a list of digits. We operate in the list-of-digits
space for efficiency, this number will grow into the millions of digits, and
Idris is currently lacking some needed primitive operations to perform this
operation in Integer
space reasonably efficiently. A LazyList
is used here
to avoid having to actually instantiate the entirety of these reasonably large
lists.
lookAndSay : LazyList Integer -> LazyList Integer
lookAndSay digits =
-- Flatten the list once more
lazyConcat
-- Convert the produced numbers into lists of their digits
. map lazyDigits
-- re-flatten our list
. lazyConcat
-- Count the number of occurrences of each digit and emit [occurrences, digit]
. map (\xs@(head ::: tail) =>
(the (LazyList _) [natToInteger $ length xs, head]))
-- Group identical digits
. lazyGroup
$ digits
Apply the look-and-say rule to an integer, for REPL testing
lookAndSay' : Integer -> Integer
lookAndSay' i =
let digits = lazyDigits i
res = lookAndSay digits
in unDigits res 0
where
unDigits : LazyList Integer -> (acc : Integer) -> Integer
unDigits [] acc = acc
unDigits (x :: xs) acc = unDigits xs (acc * 10 + x)
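This lets us sanity check against the classic look-and-say sequence (my own check, not part of the solution): 1 becomes 11, 11 becomes 21, 21 becomes 1211, and 1211 becomes 111221.
lookAndSayExamples : Bool
lookAndSayExamples =
  map lookAndSay' (the (List _) [1, 11, 21, 1211]) == [11, 21, 1211, 111221]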
Repeatedly apply lookAndSay
to a seed value, with logging
repeatLogged : Has Logger fs =>
(count : Nat) -> (seed : LazyList Integer) -> Eff fs $ LazyList Integer
repeatLogged 0 seed = pure seed
repeatLogged (S k) seed = do
trace "Remaining iterations: \{show (S k)} digits: \{show . count (const True) $ seed}"
repeatLogged k (lookAndSay seed)
Part Functions
Part 1
Parse our input, convert it into a list of digits, then run our lookAndSay
function on it 40 times, and count the output digits.
part1 : Eff (PartEff String) (Nat, LazyList Integer)
part1 = do
input <- askAt "input" >>= (note "Invalid input" . parsePositive)
let input = lazyDigits input
info "Input: \{show input}"
output <- repeatLogged 40 input
pure (count (const True) output, output)
Part 2
Same thing, but 10 more times
part2 : (digits : LazyList Integer) -> Eff (PartEff String) Nat
part2 digits = do
output <- repeatLogged 10 digits
pure $ count (const True) output
Year 2015 Day 11
This day provides a gentle introduction to refinement types: types which augment other types with a predicate that must hold for all values of the refined type, allowing us to easily define types as subsets of other types based on some property of the acceptable elements.
While refinement types are quite easy to implement in Idris, and we easily could
implement the one we need for today's task as a throw away data structure just
for this module, we will be using the refined
1 library's implementation for
the sake of getting on with it.
import Data.Vect
import Data.String
import Data.Refined.Char
import Util
Data Structures and Parsing
Provide a predicate which establishes that a char is a lowercase alphabetic
character, the only type of character that passwords are allowed to contain. We
use the FromTo
predicate from the refined
1 library to restrict chars to
within the range from a
to z
.
This predicate has multiplicity 0. A full discussion of multiplicities and linear types is out of scope for today, but put simply, this enforces that a value of this predicate type can be "used" at most 0 times, which erases the value at runtime and makes this more or less a zero-cost abstraction.
0 IsPasswordChar : Char -> Type
IsPasswordChar = FromTo 'a' 'z'
Combine a Char
with its corresponding IsPasswordChar
predicate into a
combined "refined" type, whose elements are the subset of Char
s that are
lowercase alphabetic characters.
record PasswordChar where
constructor MkPC
char : Char
{auto 0 prf : IsPasswordChar char}
%name PasswordChar pc
A function to fallibly convert Chars into refined PasswordChars. This will return Just if the Char satisfies the predicate, and Nothing otherwise, aligning with the type-level guarantees of the PasswordChar type.
refineChar : Char -> Maybe PasswordChar
refineChar c = map fromSubset $ refine0 c
where
fromSubset : Subset Char IsPasswordChar -> PasswordChar
fromSubset (Element char prf) = MkPC char
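A quick usage sketch of my own (not part of the solution): refinement succeeds for lowercase letters and fails for anything else.
refineCharExample : Bool
refineCharExample =
  case (refineChar 'q', refineChar '!') of
    (Just _, Nothing) => True
    _ => False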
Convenience function returning 'a' as a PasswordChar.
lowest : PasswordChar
lowest = MkPC 'a'
"Increment" a PasswordChar
, changing it to the next letter (a
becomes b
,
b
becomes c
, so on), returning nothing if we go past z
, corresponding to a
carry.
We do this by converting the internal Char
to an integer, adding one to it,
then converting back to a Char
. This low-level conversion loses the refinement
context, forcing us to call refineChar
on the new value to bring it back into
the refined type, providing us type-level assurance that this function will
return Nothing
if an overflow occurs.
incriment : PasswordChar -> Maybe PasswordChar
incriment (MkPC char) =
let next = chr $ ord char + 1
in refineChar next
A Password is a Vect of 8 PasswordChars. This Vect is stored in reverse order compared to the String it corresponds to, with the right-most letter first, to make implementing the incrimentPassword function a little easier and cleaner.
We also provide conversions to and from a String.
Password : Type
Password = Vect 8 PasswordChar
parsePassword : Has (Except String) fs => String -> Eff fs Password
parsePassword str = do
cs <- note "Password has incorrect number of characters: \{str}"
. toVect 8 . reverse . unpack $ str
cs <- note "Password contained invalid characters: \{str}"
$ traverse refineChar cs
pure cs
passwordToString : Password -> String
passwordToString = pack . toList . reverse . map char
Define a function to increment a Password. This function will "roll over", producing aaaaaaaa if provided zzzzzzzz.
incrimentPassword : Vect n PasswordChar -> Vect n PasswordChar
incrimentPassword [] = []
incrimentPassword (x :: xs) =
case incriment x of
Nothing => lowest :: incrimentPassword xs
Just x => x :: xs
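To see the carry behaviour in string form (a hedged sketch of my own; recall the Vect is stored reversed relative to the String), incrementing "aaaaaaaz" produces "aaaaaaba", and "zzzzzzzz" rolls over to "aaaaaaaa":
carryExample : Has (Except String) fs => Eff fs String
carryExample = map (passwordToString . incrimentPassword) (parsePassword "aaaaaaaz")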
Password validity
A password must contain a run of at least 3 incrementing characters; check this by converting the PasswordChars to their integer representations. Remember that our Password Vect is backwards compared to the string representation.
incrimentingChars : Vect n PasswordChar -> Bool
incrimentingChars (z :: next@(y :: (x :: xs))) =
let [x, y, z] : Vect _ Int = map (ord . char) [x, y, z]
in if y == x + 1 && z == y + 1
then True
else incrimentingChars next
incrimentingChars _ = False
A password may not contain i
, o
, or l
noInvalidChars : Vect n PasswordChar -> Bool
noInvalidChars = not . (any (flip contains $ ['i', 'o', 'l'])) . map char
A password contains at least two different non-overlapping pairs.
We check this by pattern matching our password two characters at a time, consuming both characters if a matched pair is found, and tacking the Char the pair is composed of onto an accumulator list as we go. This list is then reduced to only its unique elements (its nub), and we check that its length is at least 2.
containsPairs : Vect n PasswordChar -> Bool
containsPairs xs = length (nub $ pairs (reverse xs) []) >= 2
where
pairs : Vect m PasswordChar -> (acc : List Char) -> List Char
pairs [] acc = acc
pairs (x :: []) acc = acc
pairs (x :: (y :: xs)) acc =
if x == y
-- If there is a pair, consume it to prevent detecting overlapping pairs
then pairs xs (x.char :: acc)
-- If there isn't a pair, only consume one character
else pairs (y :: xs) acc
Combine our password criteria into one function
passwordCriteria : Vect n PasswordChar -> Bool
passwordCriteria xs = incrimentingChars xs && noInvalidChars xs && containsPairs xs
Find the next password
Find the next password based on a criteria function
findNextPassword :
(f : Vect n PasswordChar -> Bool) -> (password : Vect n PasswordChar)
-> Vect n PasswordChar
findNextPassword f password =
let next = incrimentPassword password
in if f next
then next
else findNextPassword f next
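As a hedged check of my own against the puzzle's published example (not part of the solution): starting from "abcdefgh", the next valid password should be "abcdffaa".
nextPasswordExample : Has (Except String) fs => Eff fs Bool
nextPasswordExample = do
  start <- parsePassword "abcdefgh"
  pure $ passwordToString (findNextPassword passwordCriteria start) == "abcdffaa"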
Part Functions
Part 1
part1 : Eff (PartEff String) (String, Password)
part1 = do
input <- map trim $ askAt "input"
password <- parsePassword input
info "Starting password: \{show password} -> \{passwordToString password}"
let next_password = findNextPassword passwordCriteria password
pure (passwordToString next_password, next_password)
Part 2
part2 : Password -> Eff (PartEff String) String
part2 password = do
info
"Second starting password: \{show password} -> \{passwordToString password}"
let next_password = findNextPassword passwordCriteria password
pure $ passwordToString next_password
References
Year 2015 Day 12
This day provides an introduction to our new
Parser.JSON
module, as well as the use of DList
s1
to collect values from an indexed type family into a single collection.
import Structures.Dependent.DList
import Parser
import Parser.JSON
Parsing
Parse a list of JSON values into a DList
.
JSONValue
's type constructor has the signature JSONType -> Type
, forming
what is known as an "Indexed Type Family".
Each type of JSON value has a separate type, e.g. a Bool has type
JSONValue TBool
, a String has type JSONValue TString
, etc. While these are
all separate types, they all share the JSONValue
component of the type
constructor, making them members of the same family.
Despite being members of the same type family, they still have different types, so we can't just shove JSONValues of different types into, say, a List JSONValue; we need a collection with some amount of heterogeneity. While a List (type : JSONType ** JSONValue type) would work, that introduces various ergonomic headaches, so instead we return a DList1, a List-like data structure specifically designed for collecting values from an indexed type family into a single collection.
The parsing logic is otherwise quite simple: we use the many combinator to turn the value parser, which parses a single JSON value, into one that parses a list of JSON values, and then use DList's fromList method to collect the results into a DList.
parseJsons : Has (Except String) fs => Has IO fs => (input : String)
-> Eff fs (types : List JSONType ** DList JSONType JSONValue types)
parseJsons input = do
result <- runFirstIO (many value) input
case result of
Left err => throw $ show err
Right result => pure $ fromList result
Solving
A reducer for DList.dFoldL
that sums all the numbers within the contained JSON
Value.
The outer function is meant to be called on a top level object, using
DList.dFoldL
on a DList
of JSONValue
s, whereas the inner function
directly reduces a JSONValue
using JSON.dFoldL
.
sumNumbers : Double -> (type : JSONType) -> (value : JSONValue type) -> Double
sumNumbers dbl type value = dFoldL sumNumbers' dbl value
where
sumNumbers' : Double -> (type : JSONType) -> (value : JSONValue type) -> Double
sumNumbers' dbl TNumber (VNumber d) = dbl + d
sumNumbers' dbl _ value = dbl
Filter out objects containing a "red" value
noRed : (type : JSONType) -> (value : JSONValue type) -> Bool
noRed TObject value =
let (types ** vals) = getValues value
in case dFind (\t, v =>
case t of
TString => v == (VString "red")
_ => False
) vals of
Nothing => True
Just _ => False
noRed _ value = True
sumNumbersNoRed :
Double -> (type : JSONType) -> (value : JSONValue type) -> Double
sumNumbersNoRed dbl type value =
case dFilter noRed value of
Nothing => dbl
Just value => sumNumbers dbl type value
Part Functions
Part 1
Parse our JSONs, then fold our sumNumbers
reducer over them.
part1 : Eff (PartEff String)
(Double, (types : List JSONType ** DList JSONType JSONValue types))
part1 = do
input <- askAt "input"
(types ** values) <- parseJsons input
let result = dFoldL sumNumbers 0.0 values
pure (result, (types ** values))
Part 2
part2 : (types : List JSONType ** DList JSONType JSONValue types)
-> Eff (PartEff String) Double
part2 (types ** values) = do
let result = dFoldL sumNumbersNoRed 0.0 values
pure result
References
Year 2015 Day 13
This day exhibits a naive, Vect
based implementation of a ring buffer, as well
as our first introduction to parameters
blocks.
import Data.String
import Data.List1
import Data.List.Lazy
import Data.Vect
import Data.Maybe
import Data.SortedMap.Dependent
import Decidable.Equality
import Util
%default total
Parsing and Data Structures
Name : Type
Name = String
Happiness : Type
Happiness = Integer
Describe a change in happiness from a change in seating arrangement as a data structure, indexed by the name of the individual whose happiness it describes, and provide some projections.
data Change : (changee : Name) -> Type where
NextTo : (changee : Name) -> (other : Name) -> (amount : Happiness)
-> Change (changee)
(.changee) : Change changee -> Name
(.changee) (NextTo changee _ _) = changee
(.other) : Change changee -> Name
(.other) (NextTo _ other _) = other
(.amount) : Change changee -> Happiness
(.amount) (NextTo _ _ amount) = amount
Collect the list of changes provided as input into a structure that encodes our assumptions at the type level.
The changes are stored in a dependent map, with the name of the individual as the key, and lists of potential changes to their happiness as the values.
This problem is a bit nicer to express in terms of a collection of known size,
and we don't want to be constantly converting the keys list to a Vect
, so we
instead store it in Changes
as a Vect
. We don't want to accidentally store
the wrong thing here, so we store an auto-implicit proof of equality,
keys_prf
, proving that the names
list is exactly the list of keys in
change_map
converted to a Vect with fromList
.
It will also make things a bit nicer if we can assume that our names list is non-empty; after all, it really doesn't make sense to talk about seating arrangements at a table with 0 people at it. So we store an auto-implicit nonempty proof establishing that the length of change_map's keys list, and thus names, is at least 1.
record Changes where
constructor MkChanges
change_map : SortedDMap Name (\n => List (Change n))
names : Vect (length (keys change_map)) Name
{auto keys_prf : names = fromList (keys change_map)}
{auto nonempty : IsSucc (length (keys change_map))}
Our usual pattern-matching based parsing of one element of the input, returning a dependent pair of the name of the individual this record describes, and the change described by that record.
parseChange : Has (Except String) fs =>
String -> Eff fs (name ** Change name)
parseChange str = do
changee ::: [_, direction, amount, _, _, _, _, _, _, other]
<- pure $ split (== ' ') str
| _ => throw "Invalid input string \{str}"
amount <- note "Invalid amount \{amount} in \{str}" $ parseInteger amount
amount : Happiness <-
case direction of
"gain" => pure amount
"lose" => pure $ negate amount
x => throw "Invalid direction \{x} in \{str}"
let other = pack . filter (/= '.') . unpack $ other
pure (_ ** (changee `NextTo` other) amount)
Parse the entire list of changes in the input, collecting them into a dependent
map as we go along, and performing the checks needed for Idris to be satisfied
that the conditions encoded by the auto-implicit proofs in Changes
are met.
parseChanges : Has (Except String) fs =>
List String -> (seed : SortedDMap Name (\n => List (Change n)))
-> Eff fs Changes
parseChanges strs seed = do
changes <- traverse parseChange strs
let change_map = insertChanges changes seed
case isItSucc (length (keys change_map)) of
Yes prf => pure $ MkChanges change_map (fromList (keys change_map))
No contra => throw "Empty table, not very interesting"
where
insertChanges : List (name ** Change name)
-> (acc : SortedDMap Name (\n => List (Change n)))
-> SortedDMap Name (\n => List (Change n))
insertChanges [] acc = acc
insertChanges ((name ** change) :: xs) acc =
case lookupPrecise name acc of
Nothing => insertChanges xs (insert name [change] acc)
Just ys => insertChanges xs (insert name (change :: ys) acc)
Solver functions
All of these functions are about to take the same first argument,
(cs : Changes)
. This is a really common occurrence, especially when dealing
with dependent proof types, so Idris has syntax sugar to avoid repeating yourself in these situations: parameters
blocks1.
A parameters
block adds the provided arguments to the start of every top level
signature contained within it, in this case, making the first argument of all of
these functions have type (cs : Changes)
. The arguments to the parameters
blocks are also added to the front of the arguments list, using the names
provided in the signature.
parameters blocks also provide another fun bit of functionality that makes code within them more concise: within a parameters block, the parameters are implicitly passed as arguments to calls to functions in the same block.
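Here is a tiny, self-contained sketch of that sugar (hypothetical functions of my own, unrelated to today's problem): both definitions receive n as a leading parameter, and addTwice calls addOnce without passing n explicitly.
parameters (n : Nat)
  addOnce : Nat -> Nat
  addOnce x = x + n

  addTwice : Nat -> Nat
  addTwice x = addOnce (addOnce x)
Outside the block the parameter becomes explicit again, so addOnce 3 4 evaluates to 7.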
parameters (cs : Changes)
Calculate the happiness change for a given person in a seating arrangement: use finS and unfinS to get the indexes of the parties seated on either side of us, look them up in our map, and add the amounts of change described by them together.
Notice how cs
appears neither in the arguments list, nor the type signature,
yet we can still refer to it as if it was included at the start of both.
happinessFor :
(arrangement : Vect (length (keys cs.change_map)) Name)
-> (idx : Fin (length (keys cs.change_map)))
-> Happiness
happinessFor arrangement idx =
let name = idx `index` arrangement
in case name `lookupPrecise` cs.change_map of
Nothing => 0
Just changes =>
let name_right = (finS idx) `index` arrangement
change_right =
fromMaybe 0 . map (.amount) . find ((== name_right) . (.other)) $
changes
name_left = (unfinS idx) `index` arrangement
change_left =
fromMaybe 0 . map (.amount) . find ((== name_left) . (.other)) $
changes
in change_right + change_left
Calculate the overall happiness change for a given arrangement by mapping our
happinessFor
function over a list of all possible indexes to the arrangement
Vect, and summing the results.
Notice how the cs
parameter is implicitly passed to happinessFor
, as we are
inside the same parameters
block as it.
happinessChange :
(arrangement : Vect (length (keys cs.change_map)) Name)
-> Happiness
happinessChange arrangement =
let idxes = List.allFins (length (keys cs.change_map))
changes = map (happinessFor arrangement) idxes
in sum changes
Find the arrangement with the maximum total change in happiness by mapping
happinessChange
over a list of all the possible permutations of our seed
arrangement described by names
, and using maxBy
to identify the largest
positive change in overall happiness.
maxHappiness : Has (Except String) fs =>
Eff fs (Happiness, Vect (length (keys cs.change_map)) Name)
maxHappiness =
let arrangements = permutations cs.names
changes = map happinessChange arrangements
pairs = zip changes arrangements
in case pairs of
[] => throw "No arrangements"
(x :: xs) => pure $ maxBy (compare `on` fst) x xs
Part Functions
Part 1
Parse our input and feed it into our maxHappiness
function.
Notice how, since we are outside the parameters
block, we have to provide the
cs
argument to maxHappiness
explicitly.
part1 : Eff (PartEff String) (Happiness, ())
part1 = do
input <- map lines $ askAt "input"
changes <- parseChanges input empty
(max, arrangement) <- maxHappiness changes
pure (max, ())
Part 2
Our implementation already replaces missing relationships with 0, so we can cheese this by injecting ourselves, with an empty relationship list, into the change_map : SortedDMap Name (\n => List (Change n)).
The overall Changes
data structure isn't easy to modify, and since our data
set is quite small here, we'll just inject this into parsing and reparse our
data.
part2 : () -> Eff (PartEff String) Happiness
part2 x = do
input <- map lines $ askAt "input"
let seed = insert "ME!!!!" [] empty
changes <- parseChanges input seed
(max, arrangement) <- maxHappiness changes
pure max
References
https://idris2.readthedocs.io/en/latest/tutorial/modules.html#parameterised-blocks-parameters-blocks
Year 2015 Day 14
This day provides us an introduction to streams: infinite, lazily generated collections of data.
import Data.String
import Data.List1
import Data.Vect
import Data.Stream
import Data.Zippable
import Decidable.Equality
import Util
Parsing and Data Structures
Collect the aspects defining a reindeer into a record
record Reindeer where
constructor MkReindeer
name : String
speed : Nat
duration, rest : Nat
This time around, since the lines describing a reindeer contain a lot of cruft,
we'll handle the parsing by converting the input, after splitting it on space
characters, to a Vect
, and indexing into that Vect
.
parseReindeer : Has (Except String) fs =>
(input : String) -> Eff fs Reindeer
parseReindeer input = do
parts <- note "Input has wrong size: \{input}" $
toVect 15 . forget . split (== ' ') . trim $ input
let name = index 0 parts
speed <- note "" $
parsePositive $ index 3 parts
duration <- note "" $
parsePositive $ index 6 parts
rest <- note "" $
parsePositive $ index 13 parts
pure $ MkReindeer name speed duration rest
Solver Functions
A stream is an infinite analog of a list, storing an infinite collection of (lazily generated) values.
Streams are defined like:
data Stream' : Type -> Type where
(::) : a -> Inf (Stream' a) -> Stream' a
Streams are members of a family of concepts analogous to iterators in imperative languages: the different flavors of colist.
Colists are the codata duals of lists. We'll dig more into this later, but to provide a high level summary: where data is defined by how it is constructed, codata is defined by how it is destructed. While a list is defined by how you
can cons an element x
onto a list xs
to produce a new list x :: xs
, a
colist is defined by how you can break down a colist x :: xs
into a head x
and a tail xs
.
Streams are a particular type of colist that has no empty case, breaking down a
Stream
will always produce an element and another stream, resulting in streams
always being infinite in length.
Destructing a Stream by pattern matching is semantically equivalent to calling the next method on an iterator in a language like Rust: it produces the element at the head of the stream, and a new stream producing future elements.
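A minimal sketch of my own (unrelated to the solution): an infinite stream of naturals counting up from a seed, and a finite prefix taken from it with take from Data.Stream.
natsFrom : Nat -> Stream Nat
natsFrom k = k :: natsFrom (S k)

firstFive : List Nat
firstFive = take 5 (natsFrom 0)
-- [0, 1, 2, 3, 4]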
We will model our reindeer's future history of locations as a stream, with each element being the position at the time given by the index into the stream, generating it with a pair of mutually recursive functions. The run function adds the speed to the current position to produce the next one, and the rest function doesn't modify the position while still consuming a time step.
distances : Reindeer -> Stream Nat
distances x = run x x.duration 0
where
run : (deer : Reindeer) -> (left : Nat) -> (position : Nat)
-> Stream Nat
rest : (deer : Reindeer) -> (left : Nat) -> (position : Nat)
-> Stream Nat
run deer 0 position = rest deer deer.rest position
run deer (S k) position = position :: run deer k (position + deer.speed)
rest deer 0 position = run deer deer.duration position
rest deer (S k) position = position :: rest deer k position
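As a hedged check of my own against the puzzle's example (not part of the solution): Comet flies at 14 km/s for 10 seconds and then rests for 127 seconds, so after 1000 elapsed seconds it should sit at 1120 km. Here index n reads the position once n seconds have passed.
cometExample : Nat
cometExample = index 1000 (distances (MkReindeer "Comet" 14 10 127))
-- expected: 1120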
Carry an accumulator containing the scores for each reindeer down the stream, granting one point to each reindeer in the leading position at the end of each second.
leaderScoring : {n : _} -> Vect (S n) (Stream Nat) -> Stream (Vect (S n) Nat)
leaderScoring xs = leaderScoring' (replicate _ 0) xs
where
leaderScoring' : {n : _} -> (acc : Vect (S n) Nat) -> Vect (S n) (Stream Nat)
-> Stream (Vect (S n) Nat)
leaderScoring' acc xs =
let positions = map (head . tail) xs
leader_pos = maxBy compare 0 positions
points : Vect _ Nat =
map (\x => if x == leader_pos then 1 else 0) positions
in acc :: leaderScoring' (zipWith (+) acc points) (map tail xs)
Part Functions
Part 1
Parse the input, generate the position Stream
s for each reindeer, then index
the finish position in each stream.
part1 : Eff (PartEff String) (Nat, ())
part1 = do
lines <- map lines $ askAt "input"
reindeer <- traverse parseReindeer lines
debug $ show reindeer
let dists = map distances reindeer
let dists_end = map (index 2503) dists
let max = maxBy compare 0 dists_end
pure (max, ())
Part 2
Parse the input into a Vect and make sure it is not empty, then generate the stream with the leaderScoring function and index into it.
part2 : () -> Eff (PartEff String) Nat
part2 x = do
lines <- map lines $ askAt "input"
let (len ** lines) = listToVect lines
case len of
0 => throw "No reindeer :("
(S k) => do
reindeer <- traverse parseReindeer lines
let dists = leaderScoring $ map distances reindeer
let dists_end = index 2503 dists
pure $ maxBy compare 0 dists_end