Math 252
Discrete Mathematics for Computer Science
Spring 2022



Worksheets/Handouts

I’ll print and bring worksheets to class, but if you lose one or want to have it in electronic format for some reason, you can get it here as well.

  • Group A1: Aayam Shrestha, Jay Carlson, Alexandra Prasser, Jacob Westra
  • Group A2: Matthew Walstra, Hyun Sim, Yong Cho, Germaine Hounakey
  • Group A3: Caleb Mitchell, Samuel Hoogewind, Oghenesuvwe Ogedegbe, Seongchan Seongchan
  • Group A4: Luz Contreras Guzman, Nathan Husler, Eli Lewis, Adam Zentner
  • Group A5: Antonio Sepulveda, Ael Lee, Tyler Voss, Haim Hong
  • Group A6: Gunju Yoo, Matthew Van Harn, Heonjae Kwon, Justin Michele
  • Group A7: Eric Klomp, Fitsum Maru, Joshua DeWindt
  • Group B1: Braden Lint, Seongrim Choi, Anwesha Pradhananga
  • Group B2: Logan Humphrey, Jason Chew, Chang Liu
  • Group B3: Caleb Hurshman, Benedicto Elpidius, Jr., Joshua Wilson
  • Group B4: Aishwarya Joshi, Chong Xiang Ju, Helen Sharp
  • Group A1: Seongchan Seongchan, Germaine Hounakey, Samuel Hoogewind, Jacob Westra
  • Group A2: Eric Klomp, Gunju Yoo, Adam Zentner, Eli Lewis
  • Group A3: Ryan Lee, Matthew Van Harn, Haim Hong, Clive Amoh
  • Group A4: Ael Lee, Caleb Mitchell, Tyler Voss, Heonjae Kwon
  • Group A5: Matthew Walstra, Justin Michele, Nathan Husler, Antonio Sepulveda
  • Group A6: Hyun Sim, Luz Contreras Guzman, Alexandra Prasser
  • Group A7: Yong Cho, Oghenesuvwe Ogedegbe, Aayam Shrestha
  • Group A8: Jay Carlson, Fitsum Maru, Joshua DeWindt
  • Group B1: Jason Chew, Caleb Hurshman, Chong Xiang Ju
  • Group B2: Chang Liu, Seongrim Choi, Logan Humphrey
  • Group B3: Braden Lint, Anwesha Pradhananga, Aishwarya Joshi
  • Group B4: Helen Sharp, Benedicto Elpidius, Jr., Joshua Wilson
  • Group A1: Clive Amoh, Ryan Lee, Adam Zentner, Fitsum Maru
  • Group A2: Jacob Westra, Samuel Hoogewind, Hyun Sim, Justin Michele
  • Group A3: Luz Contreras Guzman, Aayam Shrestha, Yong Cho, Seongchan Seongchan
  • Group A4: Matthew Van Harn, Jay Carlson, Heonjae Kwon, Antonio Sepulveda
  • Group A5: Joshua DeWindt, Tyler Voss, Gunju Yoo, Ael Lee
  • Group A6: Nathan Husler, Matthew Walstra, Eric Klomp
  • Group A7: Eli Lewis, Haim Hong, Oghenesuvwe Ogedegbe
  • Group A8: Alexandra Prasser, Caleb Mitchell, Germaine Hounakey
  • Group B1: Anwesha Pradhananga, Chang Liu, Seongrim Choi
  • Group B2: Logan Humphrey, Joshua Wilson, Caleb Hurshman
  • Group B3: Helen Sharp, Benedicto Elpidius, Jr., Braden Lint
  • Group B4: Chong Xiang Ju, Aishwarya Joshi, Jason Chew
  • Group A1: Fitsum Maru, Tyler Voss, Clive Amoh, Alexandra Prasser
  • Group A2: Samuel Hoogewind, Gunju Yoo, Ryan Lee, Caleb Mitchell
  • Group A3: Joshua DeWindt, Heonjae Kwon, Yong Cho, Matthew Walstra
  • Group A4: Aayam Shrestha, Justin Michele, Nathan Husler, Eli Lewis
  • Group A5: Eric Klomp, Ael Lee, Seongchan Seongchan, Luz Contreras Guzman
  • Group A6: Antonio Sepulveda, Matthew Van Harn, Hyun Sim
  • Group A7: Jacob Westra, Adam Zentner, Oghenesuvwe Ogedegbe
  • Group A8: Germaine Hounakey, Haim Hong, Jay Carlson
  • Group B1: Helen Sharp, Aishwarya Joshi, Caleb Hurshman
  • Group B2: Joshua Wilson, Braden Lint, Chong Xiang Ju
  • Group B3: Anwesha Pradhananga, Seongrim Choi, Chang Liu
  • Group B4: Logan Humphrey, Benedicto Elpidius, Jr., Jason Chew

Graphs

date Topic link(s)
Mon 1/10 Intro to Graphs PDF
Wed 1/12 More Graphs PDF
Fri 1/14 Graph Representations PDF
Mon 1/17 Graph Isomorphism PDF
Wed 1/19 Paths in Graphs PDF
Fri 1/21 Paths in Graphs PDF
Mon 1/24 Bipartite Graphs PDF
Wed 1/26 Planar, Non-planar Graphs PDF

Groups

We will often work in groups. Periodically we will shuffle things up so you get to work with new people.

  • Group A1: Caleb Mitchell, Matthew Van Harn, Fitsum Maru, Heonjae Kwon
  • Group A2: Joshua DeWindt, Antonio Sepulveda, Eli Lewis, Oghenesuvwe Ogedegbe
  • Group A3: Justin Michele, Clive Amoh, Ael Lee, Samuel Hoogewind
  • Group A4: Germaine Hounakey, Yong Cho, Luz Contreras Guzman, Ryan Lee
  • Group A5: Eric Klomp, Alexandra Prasser, Nathan Husler, Haim Hong
  • Group A6: Hyun Sim, Seongchan Seongchan, Aayam Shrestha
  • Group A7: Jay Carlson, Adam Zentner, Jacob Westra
  • Group A8: Gunju Yoo, Matthew Walstra, Tyler Voss
  • Group B1: Chang Liu, Anwesha Pradhananga, Benedicto Elpidius, Jr.
  • Group B2: Jason Chew, Joshua Wilson, Helen Sharp
  • Group B3: Aishwarya Joshi, Caleb Hurshman, Seongrim Choi
  • Group B4: Logan Humphrey, Braden Lint, Chong Xiang Ju
  • Group A1: Jay Carlson, Matthew Van Harn, Germaine Hounakey, Samuel Hoogewind
  • Group A2: Haim Hong, Oghenesuvwe Ogedegbe, Jacob Westra, Alexandra Prasser
  • Group A3: Ryan Lee, Eric Klomp, Hyun Sim, Eli Lewis
  • Group A4: Tyler Voss, Antonio Sepulveda, Gunju Yoo, Heonjae Kwon
  • Group A5: Yong Cho, Joshua DeWindt, Justin Michele, Nathan Husler
  • Group A6: Clive Amoh, Aayam Shrestha, Adam Zentner
  • Group A7: Matthew Walstra, Caleb Mitchell, Fitsum Maru
  • Group A8: Ael Lee, Luz Contreras Guzman, Seongchan Seongchan
  • Group B1: Aishwarya Joshi, Logan Humphrey, Caleb Hurshman
  • Group B2: Chong Xiang Ju, Helen Sharp, Jason Chew
  • Group B3: Seongrim Choi, Chang Liu, Benedicto Elpidius, Jr.
  • Group B4: Anwesha Pradhananga, Joshua Wilson, Braden Lint
  • Group A1: Haim Hong, Joshua DeWindt, Eli Lewis, Antonio Sepulveda
  • Group A2: Justin Michele, Hyun Sim, Eric Klomp, Clive Amoh
  • Group A3: Jacob Westra, Alexandra Prasser, Fitsum Maru, Adam Zentner
  • Group A4: Germaine Hounakey, Gunju Yoo, Nathan Husler, Matthew Walstra
  • Group A5: Ael Lee, Jay Carlson, Heonjae Kwon, Oghenesuvwe Ogedegbe
  • Group A6: Yong Cho, Aayam Shrestha, Samuel Hoogewind
  • Group A7: Luz Contreras Guzman, Ryan Lee, Tyler Voss
  • Group A8: Caleb Mitchell, Matthew Van Harn, Seongchan Seongchan
  • Group B1: Caleb Hurshman, Joshua Wilson, Braden Lint
  • Group B2: Jason Chew, Anwesha Pradhananga, Chong Xiang Ju
  • Group B3: Helen Sharp, Benedicto Elpidius, Jr., Aishwarya Joshi
  • Group B4: Logan Humphrey, Chang Liu, Seongrim Choi

Counting and Probability

date Topic link(s)
Mon 1/31 Some Counting Problems PDF
Wed 2/02 Pigeon Hole Principle PDF
Fri 2/04 Division Principle PDF
Mon 2/07 Donuts PDF
Fri 2/11 Probability PDF
Mon 2/14 Probability (cont’d) PDF
Wed 2/16 Conditional Probability 1 PDF
Fri 2/18 Conditional Probability 2 PDF
Mon 2/21 Random Variables PDF
Wed 2/23 Expected Value PDF
Fri 2/25 More Expected Value PDF
Mon 3/7 Probability Wrap-Up PDF

Groups

We will often work in groups. Periodically we will shuffle things up so you get to work with new people.

  • Group A1: Antonio Sepulveda, Yong Cho, Samuel Hoogewind, Tyler Voss
  • Group A2: Oghenesuvwe Ogedegbe, Hyun Sim, Heonjae Kwon, Ael Lee
  • Group A3: Alexandra Prasser, Matthew Walstra, Seongchan Seongchan, Nathan Husler
  • Group A4: Fitsum Maru, Gunju Yoo, Joshua DeWindt, Clive Amoh
  • Group A5: Eli Lewis, Adam Zentner, Jacob Westra, Aayam Shrestha
  • Group A6: Jay Carlson, Ryan Lee, Caleb Mitchell
  • Group A7: Haim Hong, Germaine Hounakey, Matthew Van Harn
  • Group A8: Eric Klomp, Justin Michele, Luz Contreras Guzman
  • Group B1: Chong Xiang Ju, Logan Humphrey, Aishwarya Joshi
  • Group B2: Anwesha Pradhananga, Jason Chew, Joshua Wilson
  • Group B3: Seongrim Choi, Chang Liu, Benedicto Elpidius, Jr.
  • Group B4: Caleb Hurshman, Helen Sharp, Braden Lint
  • Group A1: Yong Cho, Seongchan Seongchan, Aayam Shrestha, Oghenesuvwe Ogedegbe
  • Group A2: Alexandra Prasser, Antonio Sepulveda, Hyun Sim, Jacob Westra
  • Group A3: Joshua DeWindt, Justin Michele, Haim Hong, Ael Lee
  • Group A4: Luz Contreras Guzman, Ryan Lee, Adam Zentner, Tyler Voss
  • Group A5: Eli Lewis, Jay Carlson, Heonjae Kwon, Matthew Van Harn
  • Group A6: Eric Klomp, Germaine Hounakey, Fitsum Maru
  • Group A7: Gunju Yoo, Samuel Hoogewind, Clive Amoh
  • Group A8: Caleb Mitchell, Matthew Walstra, Nathan Husler
  • Group B1: Braden Lint, Joshua Wilson, Caleb Hurshman
  • Group B2: Chong Xiang Ju, Anwesha Pradhananga, Seongrim Choi
  • Group B3: Helen Sharp, Chang Liu, Logan Humphrey
  • Group B4: Jason Chew, Benedicto Elpidius, Jr., Aishwarya Joshi
  • Group A1: Aayam Shrestha, Eric Klomp, Joshua DeWindt, Eli Lewis
  • Group A2: Heonjae Kwon, Gunju Yoo, Seongchan Seongchan, Jacob Westra
  • Group A3: Hyun Sim, Yong Cho, Fitsum Maru, Jay Carlson
  • Group A4: Matthew Walstra, Alexandra Prasser, Justin Michele, Samuel Hoogewind
  • Group A5: Haim Hong, Antonio Sepulveda, Tyler Voss, Luz Contreras Guzman
  • Group A6: Germaine Hounakey, Clive Amoh, Oghenesuvwe Ogedegbe
  • Group A7: Ael Lee, Caleb Mitchell, Matthew Van Harn
  • Group A8: Nathan Husler, Ryan Lee, Adam Zentner
  • Group B1: Caleb Hurshman, Joshua Wilson, Seongrim Choi
  • Group B2: Logan Humphrey, Braden Lint, Jason Chew
  • Group B3: Chang Liu, Chong Xiang Ju, Benedicto Elpidius, Jr.
  • Group B4: Helen Sharp, Aishwarya Joshi, Anwesha Pradhananga

Choose your own groups!

Languages, Grammars, Machines

date Topic link(s)
Wed 3/9 Grammars PDF
Fri 3/11 Types of Grammars PDF
Wed 3/16 Automata PDF
Fri 3/18 Automata PDF
Fri 3/18 Nondeterministic Automata PDF
Fri 3/25 DFA vs NFA PDF
Fri 3/25 Regular Expressions and Languages PDF
Mon 3/28 Regular Expression -> NFA PDF
Wed 3/30 Regular Grammars and Automata PDF
04/04 DFA to regex PDF, notes
04/06 Turing Machines PDF, simulator
04/11 A language that is not regular PDF
05/07 Regular expressions in python slides

Note: You don’t have to turn in any homework for Python regular expressions, but you should work through some of these games and tutorials to hone your skills. This topic will be included on the final exam.

  • The regular expression game

    • Construct regular expressions to match the indicated portions of a page of text.
    • These tasks seem to mimic the kinds of things you might really do (finding authors in a bibliography, finding MAC addresses in some text, etc.)
  • regex 101

  • Regex crossword game

    • Fill in the crossword with text that matches regex clues.
    • This forces you to work backwards; the regular expressions are given and you must produce text that matches them.
  • Regular expression golf

    • Find a short regex that matches everything in one column but doesn’t match any portion of anything in the other column.
    • Some of the ones here get quite hard and aren’t very “natural”
    • Least useful of the three for learning regex.
  • No homework for today’s topic. Instead I have some online regular expression games for you to play. (See above.)
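If you’d like a taste before diving into the games, here is a small Python warm-up along the same lines (the text and the pattern are my own example, not from any of the linked games): finding MAC-address-like tokens in a chunk of text.

```python
import re

# Find MAC-address-like tokens: six pairs of hex digits separated by colons.
# The sample text and the pattern are my own illustration.
text = "eth0: 00:1a:2b:3c:4d:5e up, lo: no hardware address"
macs = re.findall(r"\b(?:[0-9a-f]{2}:){5}[0-9a-f]{2}\b", text)
print(macs)  # ['00:1a:2b:3c:4d:5e']
```

Because the grouping is non-capturing (`(?:...)`), `findall` returns the whole match rather than just the last group.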

For each grammar, look at each rule to see if it follows the restrictions of each type of grammar.

Example: \(G_1\)

  • Not context sensitive because the rule \(AB \to b\) doesn’t work. (One of \(A\) or \(B\) would need to be part of the context and so reappear on the right side.)
  • Not context free or regular because there are rules with more than a single non-terminal on the left side.
  • So \(G_1\) is type 0, but not any of the other types.

Example: You can’t have \(A \to \lambda\) and \(A\) on the right side of rules in a context sensitive grammar. So you will need to remove one or the other to make this type 1. (There are other things that need doing as well.)

There is one case where the answer is no. Can you find it?

Despite the answer to Exercise 2, this is true. Do you see how to get around the issue in Exercise 2 when we talk about languages rather than grammars?

Using \(S\) as the start nonterminal and \(a\) and \(b\) as terminals, the following grammar is type 2 but not type 1 (because we aren’t allowed to have \(S\) on the right side of a rule if we also have \(S\to\lambda\)):

  • \(S \to \lambda\)
  • \(S \to aS\)
  • \(S \to ab\)

But the language generated by the grammar is a type 1 language because we can use a different grammar to generate it:

  • \(S \to \lambda\)
  • \(S \to A\)
  • \(A \to aA\)
  • \(A \to ab\)

Basically we just translate \(S\) to \(A\). This avoids the problem of having \(S\) on the right side of a production rule. This same trick can be applied to any type 2 grammar.
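As a sanity check on the translated grammar: it generates the empty string together with strings of one or more a’s followed by a single b, which we can test against the Python regular expression `a+b` (the helper name `in_language` is mine):

```python
import re

# The translated grammar's language: lambda, or one or more a's then one b.
# in_language is my own helper name for this quick check.
PATTERN = re.compile(r"a+b")

def in_language(s):
    # Accept the empty string (from S -> lambda) or anything matching a+b.
    return s == "" or PATTERN.fullmatch(s) is not None

for s in ["", "ab", "aaab", "a", "b", "aba"]:
    print(repr(s), in_language(s))
```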

More generally, to tell whether a grammar is of a certain type, we just need to check the rules. But to tell whether a language is of a certain type, we need to ask whether there is any grammar of the appropriate type that can generate it.

Note: This language is actually also regular. Here is a regular grammar that generates the same language.

  • \(S \to \lambda\)
  • \(S \to aA\)
  • \(A \to aA\)
  • \(A \to b\)

Answer: There is no place for context in the notation. The left side is always just a single non-terminal.

Example: Grammar 2

<S> ::= a<A> | b
<A> ::= aa

Here are example machines for a few of these. Note: I strongly advise using meaningful state names.

I’m doing these with “ASCII art”, so I won’t always include all of the transitions. Be sure to read the description that goes along with each machine. You might want to draw out the full machine for yourself.

  1. The set of bitstrings that begin with two 0’s

    Keep track of what has been seen so far. Here’s the core of the DFA:

    -> [ ]  -- 0 --> [ 0 ] -- 0 --> [[ 00 ]] 

    Now add:

    • a self-loop on the accepting state (for both 0 and 1)
    • all other transitions go to a non-accepting ‘dead’ state.

    The only way to get to the accepting state is to have the first two symbols be 0.

  2. The set of bitstrings that contain exactly two 0’s

    The core of this is similar to the previous part, but now the 0’s don’t need to be consecutive, and we are not allowed to have extra 0’s. Again, start with this:

    -> [ no zeros ]  -- 0 --> [ one 0 ] -- 0 --> [[ two 0s ]] -- 0 --> [more]

    Now add some more transitions:

    • When reading a 1, always loop where you are.
    • When reading a 0 at the [more] state, loop.
  3. The set of bitstrings that contain at least two 0’s

    The only change to the previous machine is that if we ever get to [[ two 0’s]], then we stay there (loop on both 0 and 1). So we can delete the [more] state.

    -> [ no zeros ]  -- 0 --> [ one 0 ] -- 0 --> [[ two 0s ]]
  4. The set of bitstrings that contain two consecutive 0’s (anywhere in the string)

    Again, very similar. But now, whenever we see a 1 before we see two consecutive 0’s, we “start over”. Once we see two consecutive 0’s, we know the string is accepted.

    Here’s the core:

    -> [ no zeros ]  -- 0 --> [ one 0 ] -- 0 --> [[ good! ]] 

    Now add

    • self-loop on [[ good! ]] for both 0 and 1.
    • transitions back to the start state if you read a 1 while in the other two states.
  5. The set of bitstrings that do not contain two consecutive 0’s anywhere

    We need to remember the same information as in the previous example, but we need to flip the result.

    Here’s the core:

    -> [[ no zeros ]]  -- 0 --> [[ one 0 ]] -- 0 --> [ bad! ] 

    Now add:

    • self-loop at [ bad! ] for both 0 and 1.
    • transitions back to start from the other two states when you read a 1.
  6. The set of bitstrings that end with two 0’s

    This time, we need to remember the last two bits we have seen. So the core states are 00, 01, 10, and 11. We’ll need to add some states to get started: S, 0, and 1. These states encode what the last 2 bits were. Now see if you can figure out how to add the transitions and determine which states are accepting.
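The machines above are easy to test in code. Here is a minimal Python simulator for machine 4 (contains two consecutive 0’s), using the descriptive state names suggested in the sketch; the table layout is my own.

```python
# DFA for "contains two consecutive 0's" (machine 4 above), written as a
# transition table keyed by (state, symbol).
TRANS = {
    ("no zeros", "0"): "one 0", ("no zeros", "1"): "no zeros",
    ("one 0", "0"): "good",     ("one 0", "1"): "no zeros",
    ("good", "0"): "good",      ("good", "1"): "good",
}
ACCEPT = {"good"}

def accepts(s):
    state = "no zeros"
    for ch in s:
        state = TRANS[(state, ch)]
    return state in ACCEPT

for w in ["10100", "10101", "00", ""]:
    print(w, accepts(w))
```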

Note: Rosen likes to label his states \(s_0\), \(s_1\), etc. I find that clunky and I suggest that you not bother with subscripts when creating your own machines. You can use numbers, or letters, or even short words or phrases that identify what a state represents. The worksheet will show you both styles.

Hint: Think about the pebbling algorithm. How do the pebbles relate to states in a DFA?

Make sure you give this a good attempt before comparing your work to this solution.

It isn’t important where you put the states, but it is important that the edges and labels match.

In the worst case there is a state in the DFA for every possible set of states in the NFA. If the NFA has \(n\) states, how many subsets of states are there? Hint: to build a subset, you need to decide for each state whether it is in or out of the subset. Now use your counting rules.
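By the product rule, each of the \(n\) states gives an independent in-or-out choice, so there are \(2^n\) subsets. A quick Python check (the helper `all_subsets` is my own):

```python
from itertools import chain, combinations

# Enumerate every subset of a set of NFA states; in the worst case each one
# becomes a state of the DFA built by the subset construction.
def all_subsets(states):
    return list(chain.from_iterable(
        combinations(states, r) for r in range(len(states) + 1)))

nfa_states = ["S", "A", "B"]
subsets = all_subsets(nfa_states)
print(len(subsets), 2 ** len(nfa_states))  # one in/out choice per state
```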

What strings belong to \(\{0, 01\}^2\)? Let’s add a vertical bar to show how the strings are formed. We need to have two pieces, and each piece must be either 0 or 01, so

  • \(0|0 \to 00\)
  • \(0|01 \to 001\)
  • \(01|0 \to 010\)
  • \(01|01 \to 0101\)

Which of these strings belong to \(\{0, 01\}^*\)?

  1. \(01001\). Yes: \(01|0|01\)
  2. \(10110\). No: Cannot start with a \(1\).
  3. \(00010\). Yes: \(0|0|01|0\).

Which of these strings belong to \(\{101, 111, 11\}^*\)?

  1. \(1010111\). No. You can start 101, but then you are stuck.
  2. \(1011011\). No. You have to start 101, then 101 again, but then there is only one 1 left.
  3. \(1110111\). Yes. \(11|101|11\).
  4. \(11110111\). Yes. \(111|101|11\)
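Membership in \(S^*\) can also be checked mechanically: a string is in \(S^*\) exactly when some shorter prefix in \(S^*\) is followed by one piece from \(S\). A short dynamic-programming sketch (the function name `in_star` is mine):

```python
# Check whether s can be split into pieces drawn from the given set.
# ok[i] records whether the first i characters of s lie in S*.
def in_star(s, pieces):
    ok = [False] * (len(s) + 1)
    ok[0] = True  # the empty string is always in S*
    for i in range(1, len(s) + 1):
        for p in pieces:
            if i >= len(p) and ok[i - len(p)] and s[i - len(p):i] == p:
                ok[i] = True
    return ok[len(s)]

print(in_star("1110111", ["101", "111", "11"]))  # True: 11|101|11
print(in_star("1010111", ["101", "111", "11"]))  # False: stuck after 101
```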

\(1\) is a symbol or a string with one symbol. \(\boldsymbol{1}\) represents a language with one string in it: \(\{1\}\).

\(\lambda\) is the empty string. \(\boldsymbol{\lambda}\) represents a language containing only the empty string: \(\{\lambda\}\).

\(\emptyset\) is the empty set. \(\boldsymbol{\emptyset}\) represents the empty language, but languages are sets, so the empty language is the empty set.

Describe in words the languages represented by each of these regular expressions:

  1. \(\boldsymbol{10^*}\) is the set of all strings that begin with a 1 but don’t have any other 1’s.
  2. \(\boldsymbol{(10)^*}\) is the set of all strings with even length that alternate 1’s and 0’s, starting with a 1. (0 is even, so the empty string is included in this description.)
  3. \(\boldsymbol{0 \cup 01} = \{0, 01\}\), a language with only two strings, \(0\) and \(01\).
  4. \(\boldsymbol{0^* \cup 01}\) consists of the string 01 and any string made up only of 0’s (including the empty string).
  5. \(\boldsymbol{0 (0 \cup 1)^*}\) is the language consisting of strings that start with \(0\).
  6. \(\boldsymbol{(0 \cup 01)^*}\) is the language consisting of all strings that do not start with a 1 and do not have two adjacent \(1\)’s. (The empty string is included in this.)
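These descriptions are easy to spot-check with Python’s re module, writing union as `|` and using `fullmatch` so the whole string must match:

```python
import re

# Spot-checking a few of the descriptions above.
print(bool(re.fullmatch(r"(10)*", "101010")))  # True: alternating, even length
print(bool(re.fullmatch(r"(10)*", "")))        # True: empty string included
print(bool(re.fullmatch(r"10*", "1000")))      # True: one leading 1, then 0's
print(bool(re.fullmatch(r"0(0|1)*", "1100")))  # False: must start with 0
```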

All of the languages in this problem can be recognized by either a DFA or an NFA. Do you see how to design them?

This will be our topic for next time.

Start state: \(S\); Accepting states: \(S\) and \(X\)

state 0 1
\(S\) \(\{X\}\) \(\{A\}\)
\(A\) \(\{A\}\) \(\{A\}\)
\(X\) \(\{\}\) \(\{X\}\)

It’s a good idea to check your work on a few strings to make sure the grammar and the NFA agree. You might try some of these:

  • \(\lambda\)
  • \(0\)
  • \(1\)
  • \(01\)
  • \(101\)
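One way to do this check mechanically is with a tiny NFA simulator (my own helper, not part of the assignment), driving the transition table above over each test string:

```python
# NFA from the table above: start state S, accepting states S and X.
delta = {
    ("S", "0"): {"X"}, ("S", "1"): {"A"},
    ("A", "0"): {"A"}, ("A", "1"): {"A"},
    ("X", "0"): set(), ("X", "1"): {"X"},
}
accepting = {"S", "X"}

def nfa_accepts(s):
    current = {"S"}  # the set of states we could currently be in
    for ch in s:
        current = set().union(*(delta[(q, ch)] for q in current))
    return bool(current & accepting)

for w in ["", "0", "1", "01", "101"]:
    print(repr(w), nfa_accepts(w))
```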

Start state: \(S\); Accepting state: \(X\)

state 0 1
\(S\) \(\{X\}\) \(\{A\}\)
\(A\) \(\{B\}\) \(\{A, X\}\)
\(B\) \(\{A, X\}\) \(\{B\}\)
\(X\) \(\{\}\) \(\{\}\)

It’s a good idea to check your work on a few strings to make sure the grammar and the NFA agree. You might try some of these:

  • \(\lambda\)
  • \(110\)
  • \(011\)
  • \(11001\)
  • \(11100\)

Three types of rules: \(S\to\lambda\), \(A \to x B\), and \(A \to x\), where \(S\) is the start symbol, capital letters are non-terminals, and lower-case letters are terminals.

States of \(N\): one for each non-terminal in the grammar and one more (let’s call it \(X\) for extra).

Start state = start symbol

\(X\) is an accepting state; the only other state that might be accepting is the start state.

Convert rules to transitions:

  • \(S \to \lambda\) makes the start state accepting.
  • \(A \to xB\) adds a transition \(A \stackrel{x}{\to} B\). (This is a self-loop if \(A = B\).)
  • \(A \to x\) adds a transition \(A \stackrel{x}\to X\).
  • Note: there will be no transitions out of state \(X\).
  • \(S_0 \to \lambda\)
  • \(S_0 \to 0S_1\)
  • \(S_0 \to 1S_2\)
  • \(S_0 \to 1\)
  • \(S_1 \to 0S_1\)
  • \(S_1 \to 1S_2\)
  • \(S_1 \to 1\)
  • \(S_2 \to 0S_1\)
  • \(S_2 \to 1S_2\)
  • \(S_2 \to 1\)

\(S\) is the new lonely start state.

  • \(S \to 0S_0\)
  • \(S \to 1S_1\)
  • \(S \to 1\)
  • \(S_0 \to 0S_0\)
  • \(S_0 \to 1S_1\)
  • \(S_0 \to 1\)
  • \(S_1 \to 0S_2\)
  • \(S_1 \to 1S_1\)
  • \(S_1 \to 1\)
  • \(S_2 \to 0S_2\)
  • \(S_2 \to 1S_2\)

\(S\) is the new lonely start state.

  • \(S \to \lambda\)
  • \(S \to 1S_1\)
  • \(S \to 1\)
  • \(S_0 \to 1S_1\)
  • \(S_0 \to 1\)
  • \(S_0 \to 0S_0\)
  • \(S_1 \to 0S_2\)
  • \(S_2 \to 0S_0\)
  • \(S_2 \to 1S_0\)
  • \(S_2 \to 0\)
  • \(S_2 \to 1\)
  • terminals = input alphabet
  • non-terminals = states (after adding lonely start state)
  • start symbol = lonely start state
  • Add \(S \to \lambda\) if start state is accepting.
  • Add \(A \to xB\) if \(A \stackrel{x}{\to} B\) is a transition.
  • Add \(A \to x\) if \(A \stackrel{x}{\to} B\) and \(B\) is accepting.
  • Lonely start state avoids having rules with start state on the right side (which is not allowed for regular grammars).
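The recipe in this list is mechanical enough to sketch in a few lines of Python. The DFA below is my own small example (bitstrings that start with 1, with dead state D); since nothing transitions into S, it already serves as a lonely start state:

```python
# DFA -> regular grammar, following the recipe above.
# Example DFA (my own): accepts bitstrings that start with 1.
transitions = {
    ("S", "0"): "D", ("S", "1"): "A",
    ("A", "0"): "A", ("A", "1"): "A",
    ("D", "0"): "D", ("D", "1"): "D",
}
accepting = {"A"}
start = "S"  # no transitions enter S, so S never appears on a right side

rules = []
if start in accepting:
    rules.append("S -> lambda")          # only if the start state is final
for (state, sym), target in sorted(transitions.items()):
    rules.append(f"{state} -> {sym}{target}")   # one rule per transition
    if target in accepting:
        rules.append(f"{state} -> {sym}")        # extra rule into final states

print(rules)
print(len(rules))  # transitions (6) + transitions into final states (3)
```

The rule count matches the formula given above: one rule per transition, one more per transition into a final state, plus one if the start state is final.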

The algorithm works just the same if you start from an NFA or a GNFA.

As a side note, this implies that GNFAs are equivalent to NFAs. In particular, \(\lambda\)-NFAs (NFAs that allow transitions labeled \(\lambda\) that can be used without processing any input symbols) are equivalent. So our diagram looks like this

\[ \Large \begin{array}{ccccccc} \mathrm{DFA} & \longleftrightarrow & \mathrm{NFA} & \longrightarrow & \lambda\mathrm{-NFA} & \longrightarrow & \mathrm{GNFA} \\ && \downarrow \uparrow & \nwarrow & & \swarrow \\ & & \mathrm{regular}& & \mathrm{regular} \\ & & \mathrm{grammar}& & \mathrm{expression} \end{array} \] Some books and online resources will demonstrate the conversion

\[ \mbox{regular expression} \longrightarrow \lambda\mathrm{-NFA} \] since this is slightly easier than \[ \mbox{regular expression} \longrightarrow \mathrm{NFA} \] But then we would need an additional conversion \[ \lambda\mathrm{-NFA} \longrightarrow \mathrm{NFA} \;. \] This is doable, but isn’t really any easier than building it into the conversion from regular expressions to NFAs, the way we did it.

You should know how to perform all of the conversions illustrated in the diagram above.

Compare your descriptions to the descriptions below. Did you leave out anything important? Do the solutions below leave out anything important?

1a. regex \(\to\) NFA:

  • 6 parts:

    • 3 base cases – pretty easy
    • 3 that show union, concatenation and Kleene star – these are the “interesting” part
  • union: \(A \cup B\) where \(A = L(N_1)\) and \(B = L(N_2)\).

    • keep all states and transitions from \(N_1\) and \(N_2\)

    • add one new start state \(S\); \(S\) is final if either start state of \(N_1\) or start state of \(N_2\) is final

    • add some additional transitions

      • if \(S_1 \stackrel{x}{\to} A\), add \(S \stackrel{x}{\to} A\)
      • if \(S_2 \stackrel{x}{\to} B\), add \(S \stackrel{x}{\to} B\)
  • concatenation: \(AB\) where \(A = L(N_1)\) and \(B = L(N_2)\).

    • keep all states and transitions from \(N_1\) and \(N_2\)
    • start state from \(N_1\) is the start state
    • only final states from \(N_2\) are final
    • add some additional transitions
      • if \(A \stackrel{x}{\to} F\) (a final state in \(N_1\)), then add \(A \stackrel{x}{\to} S_2\) (start state in \(N_2\))
  • Kleene star: \(A = L(N_1)\)

    • keep all states and transitions from \(N_1\)

    • add a new start state that is a final state (because \(\lambda \in A^*\))

    • add new transitions from new start state (\(S\))

      • if \(S_1 \stackrel{x}{\to} A\), add \(S \stackrel{x}{\to} A\) (includes case where \(A = S_1\))
    • add new transitions to old start state (\(S_1\))

      • if \(A \stackrel{x}{\to} F\), where F is final, add \(A \stackrel{x}{\to} S_1\)
      • If we are at the end of a string in \(L(N_1)\), this lets us “restart”

1b. DFA \(\to\) NFA

  • easy: a DFA “is” an NFA. (Why is “is” in quotes?)

1c. NFA \(\to\) DFA

  • DFA states are subsets of NFA states (think: magnets on white board)

  • A is final in DFA if A contains a final state of NFA

  • Remember that \(\{ \}\) may be needed as one of the states in the DFA

1d. DFA (or NFA) \(\to\) reg grammar

  • \(A \stackrel{x}{\to} B\) becomes

    • \(A \to xB\)

    • also \(A \to x\) if B is final state

  • If S is final in NFA/DFA, then add \(S \to \lambda\)

  • Number of rules = number of transitions + number of transitions to final states + 1 more if the start state is final

1e. reg grammar \(\to\) NFA

  • states: non-terminals of grammar + one additional state \(F\) (a final state)

    • Start symbol is also start state
    • If \(S \to \lambda\) is a rule, then \(S\) is a final state
  • production rules become transitions in NFA

    • \(A \to xB\) becomes \(A \stackrel{x}{\to} B\)
    • \(A \to x\) becomes \(A \stackrel{x}{\to} F\) (the added final state)
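The 1e recipe is also easy to sketch in code. The grammar below is my own toy example (not from the worksheet); rules are (left, right) pairs, with "" standing for \(\lambda\):

```python
# Regular grammar -> NFA, following recipe 1e above.
# Toy grammar (mine): S -> lambda | 0A | 1A,  A -> 0A | 1.
rules = [("S", ""), ("S", "0A"), ("S", "1A"), ("A", "0A"), ("A", "1")]

delta = {}           # transition relation of the NFA
accepting = {"F"}    # the one added state F is always accepting
for left, right in rules:
    if right == "":              # S -> lambda: start state becomes accepting
        accepting.add(left)
    elif len(right) == 2:        # A -> xB: add transition A --x--> B
        delta.setdefault((left, right[0]), set()).add(right[1])
    else:                        # A -> x: add transition A --x--> F
        delta.setdefault((left, right[0]), set()).add("F")

print(sorted(delta.items()), sorted(accepting))
```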

1f. DFA \(\to\) regular expression

  • Big idea: Make GNFAs with fewer and fewer states until we have a GNFA with two states (one the start state and the other the accepting state) and one transition between them, labeled with a regular expression.

  • If necessary, make two adjustments to the DFA before beginning

    • Make sure there is a lonely start state
    • Make sure there is a single accepting state with no exiting transitions.
    • \(\lambda\)-transitions are allowed for this step, so we might not have a DFA anymore after these adjustments.
  • Pick any internal state \(R\) (not the start or accepting state) and “rip it out”.

    • create or update transitions \(A \to B\) when there was \(A \to R \to B\).

In addition to the in class discussion, you might like to look at page 885 of the text.

The basic idea used to show that \(\{0^n1^n \mid n = 0, 1, 2, ... \}\) is not regular can be turned into a lemma that doesn’t mention DFAs at all.

Suppose \(L\) is a regular language. Then there must be a DFA \(M\) that recognizes \(L\). This machine has some number of states, let’s call that number \(N\).

Now suppose we have a long string \(x\) with more symbols than \(N\). (We will write this as \(l(x) > N\).) Reading \(x\) takes \(M\) through more states than it has, so by the pigeon hole principle some state must repeat, and the symbols read between the two visits to that state trace out a loop. We can break our string into three parts:

  • before the loop: \(u\)
    • What can we say about the length of \(u\)?
  • in the loop: \(v\)
    • What can we say about the length of \(v\)?
  • after the loop: \(w\).

Explain why each of these strings is in \(L\) if and only if \(x \in L\).

  • \(uvw\)
  • \(uw\)
  • \(uvvw\)
  • \(uv^kw\)

Putting this all together gives us the Pumping Lemma:

If \(L\) is a regular language, then there is a number \(N\) such that for any string \(x\) with \(l(x) > N\), we can write \(x\) as

\[ x = uvw \]

such that

  • \(l(uv) \le N\),
  • \(l(v) > 0\),
  • \(x \in L\) if and only if \(uv^kw \in L\) for every natural number \(k\).

Important things to note:

  • The Pumping Lemma says something about long strings. There is some length beyond which the \(x = uvw\) splitting is guaranteed. It may or may not work for shorter strings.

  • You do not get to choose \(u\), \(v\), and \(w\). The Pumping Lemma tells you that there is a way to split up \(x\), not that every way of splitting up \(x\) will work.

  • The length constraints on \(u\) and \(v\) are often important in applications. In particular, \(l(uv) \le N\).

Let’s try using the Pumping Lemma to show that \(\{0^n1^n \mid n = 0, 1, \dots \}\) is not regular.

Suppose \(L\) were regular. Let \(N\) be the number given by the Pumping Lemma. Consider \(x = 0^N1^N\).

  • There is some way to split up \(x\) as \(x = uvw\) (because \(x\) is long enough), and
  • \(u\) and \(v\) consist entirely of 0’s, since \(l(uv) \le N\).

Now consider the string \(uw\). \(uw\) must be in \(L\), but \(uw\) has fewer 0’s than 1’s, so it can’t be in \(L\). This means our assumption that \(L\) was regular must be false.
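To make the argument concrete, a short Python check (with a small \(N\) of my own choosing; this is a demonstration, not a proof script) can verify that no legal split survives pumping down:

```python
# For L = {0^n 1^n}, check every split x = uvw with len(uv) <= N and v
# nonempty: pumping down to uw always leaves the string outside L.
def in_L(s):
    zeros = s.count("0")
    return s == "0" * zeros + "1" * (len(s) - zeros) and 2 * zeros == len(s)

N = 5
x = "0" * N + "1" * N
for i in range(N):                   # u = x[:i]
    for j in range(i + 1, N + 1):    # v = x[i:j]; len(uv) = j <= N, v nonempty
        u, v, w = x[:i], x[i:j], x[j:]
        assert not in_L(u + w)       # v is all 0's, so uw has too few 0's
print("every legal split fails when pumped down")
```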

Note 1: Anything you can prove using the Pumping Lemma, you can also prove by going back to the underlying argument about repeated states (pigeon hole principle), loops, etc. So you can think of the Pumping Lemma as an (optional) alternative way to think about these problems. I don’t care which approach you take, as long as you do it correctly. Any problem that says “by the Pumping Lemma” may also be done from the underlying argument.

Note 2: Using \(x = 0^N1^N\) is sufficient since we always visit one more state than the number of symbols read. (After reading the first symbol, we have seen the start state and a next state.)
But if you prefer to use \(x=0^{N+1}1^{N+1}\) just to make the argument easier, that’s fine too.