nannon-book

Reddit Bible

About

In short, the Reddit Bible compares questions found on 8 Reddit advice boards with interrogative sentences found in the Bible.

After much wrangling with religious-text-based concepts, I decided to create a "Reddit Advice" board by answering biblical questions with Reddit content. I scraped the Bible for interrogative sentences and, using sentence embeddings, matched them with similar questions scraped from Reddit advice boards. From there, I wanted to answer the Bible questions with Markov-chain-generated answers based on the thread responses to the respective Reddit questions.

Unexpectedly, I loved the results of the "similar" Reddit and Bible questions--the seeming connections between the two almost hint at some sort of relation between the questions people are asking now and the ones they asked in biblical times. Though I did go through with the Markov-chained responses, the results were a bit jumble-y and seemed to take away from what I liked about the juxtaposed questions. Ultimately, I made the decision to cut the Markov chains and to highlight the contrast in each pair of questions, as well as how similar the computer thinks the question pairs are.

Inspiration

I originally wanted to generate some sort of occult text, e.g. Chinese divination. I ended up pivoting to the more normative of religious texts, the Bible to be specific, since I have a lot of personal experience with this one. Prior to the "Reddit advice" board, I actually had the opposite idea of making a "Christian advice" board, where I would gather 4chan questions and answer them with Markov-chain-generated responses based on real Christian advice forums. I scraped a Christian advice forum, but the results were too few and inconsistent, so I knew I had to pivot a bit. That's when I flipped the idea and decided to answer Bible questions with Reddit data instead (4chan's threads were a little too inconsistent and lacking compared with Reddit's thousand-count response threads).

If anyone ever wants buttloads of responses from a Christian forum: https://github.com/swlsheltie/christian-forum-data

Process

Once I solidified my concept, it was time to execute, one step at a time.

  1. Getting matching question pairs from Reddit and the Bible
    1. Getting questions from Reddit
      1. Reddit has a great API, with a Python wrapper called PRAW
      2. Originally I only scraped r/advice, but towards the end, I decided to bump it up and scrape 8 different advice subreddits: r/advice, r/internetparents, r/legal_advice, r/need_a_friend, r/need_advice, r/relationship_advice, r/tech_support, r/social_skills
        1. Using PRAW, I looked at the top posts of all time, with no limit
      3. r/Advice yielded over 1000 question sentences, and the other advice subreddits ranged more or less around that number.
      4. Lists of the scraped questions:
        1. https://github.com/swlsheltie/reddit_bible/tree/master/lists%20of%20reddit%20questions
      5. # REDDIT QUESTIONS X BIBLE QUESTIONS
        # ENTER A SUBREDDIT
        #   FIND ALL QUESTIONS
        # INFERSENT WITH BIBLE QUESTIONS 
        # GET LIST OF [DISTANCE, SUBREDDIT, THREAD ID, SUBMISSION TEXT, BIBLE TEXT]
        #   HIGHLIGHT SPECIFIC QUESTION
         
         
        # load 400 iterations 
        # format [distance, subreddit, reddit question, bible verse, bible question]
         
         
         
        # get bible questions
        from final_bible_questions import bible_questions
        from split_into_sentences import split_into_sentences
         
        # write to file
        data_file= open("rqbq_data.txt", "w")
        rel_advice_file = open("rel_advice.txt", "w")
        legal_advice_file = open("legal_advice.txt", "w")
        tech_support_file = open("tech_support.txt", "w")
        need_friend_file = open("needfriend.txt", "w")
        internetparents_file = open("internetparents.txt", "w")
        socialskills_file = open("socialskills.txt", "w")
        advice_file = open("advice.txt", "w")
        needadvice_file = open("needadvice.txt", "w")
         
        files = [rel_advice_file, legal_advice_file, tech_support_file, need_friend_file, internetparents_file, socialskills_file, advice_file, needadvice_file]
         
        # libraries
        import praw
        import re
         
        # reddit keys
        reddit = praw.Reddit(client_id='---',
                             client_secret='---',
                             user_agent='script: http://0.0.0.0:8000/: v.0.0(by /u/swlsheltie)')
         
        # enter subreddit here ------------------------------------------------------------------------------------action required
        list_subreddits = ["relationship_advice", "legaladvice", "techsupport", "needafriend", "internetparents", "socialskills", "advice", "needadvice"]
         
        relationshipadvice = {}
        legaladvice={}
        techsupport={}
        needafriend={}
        internetparents={}
        socialskills={}
        advice={}
        needavice={}
         
        subreddit_dict_list=[relationshipadvice, legaladvice, techsupport, needafriend, internetparents, socialskills, advice, needavice]
         
        relationshipadvice_questions = []
        legaladvice_questions =[]
        techsupport_questions=[]
        needafriend_questions=[]
        internetparents_questions=[]
        socialskills_questions=[]
        advice_questions=[]
        needavice_questions=[]
         
        questions_list_list=[relationshipadvice_questions, legaladvice_questions, techsupport_questions, needafriend_questions, internetparents_questions, socialskills_questions, advice_questions, needavice_questions]
         
         
        # sub_reddit = reddit.subreddit('relationship_advice')
         
        for subreddit in list_subreddits:
            counter=0
            print(subreddit)
            i = list_subreddits.index(subreddit)
            sub_reddit = reddit.subreddit(subreddit)
            txt_file_temp = []
         
            for submission in sub_reddit.top(limit=1000):
                # print(submission)
                print("...getting from reddit", counter)
                submission_txt = str(reddit.submission(id=submission).selftext.replace('\n', ' ').replace('\r', ''))
                txt_file_temp.append(submission_txt)
                counter+=1
            print("gottem")
            for sub_txt in txt_file_temp:
                print("splitting")
                sent_list = split_into_sentences(sub_txt)
         
                for sent in sent_list:
                    print("grabbing questions")
                    if sent.endswith("?"):
                        questions_list_list[i].append(sent)
            print("writing file")
            files[i].write(str(questions_list_list[i]))
            print("written file, next")
         
        # for list_ in questions_list_list:
        #     print("\n")
        #     print(list_subreddits[questions_list_list.index(list_)])
        #     print(list_)
        #     print("\n")
    2. Getting questions from the bible
      1. Used the King James Version because of this awesome text file that only has the text in it (no verse numbers, etc.), which would bite me in the ass later on
      2. Found some code on Stack Overflow that allowed me to get a list of the sentences in the bible
      3. Originally used RiTa to get the question sentences, then towards the end (since RiTa --> Python was too much of a hassle), I just went through and found all sentences that ended with a "?".
        1. file = open("bible.txt", "r")
          empty= open("bible_sent.txt", "w")
          bible = file.read()
           
          from nltk import tokenize
          import csv
          import re
           
          alphabets= "([A-Za-z])"
          prefixes = "(Mr|St|Mrs|Ms|Dr)[.]"
          suffixes = "(Inc|Ltd|Jr|Sr|Co)"
          starters = "(Mr|Mrs|Ms|Dr|He\s|She\s|It\s|They\s|Their\s|Our\s|We\s|But\s|However\s|That\s|This\s|Wherever)"
          acronyms = "([A-Z][.][A-Z][.](?:[A-Z][.])?)"
          websites = "[.](com|net|org|io|gov)"
           
          # final_output=[]
           
          def split_into_sentences(text):
              text = " " + text + "  "
              text = text.replace("\n"," ")
              text = re.sub(prefixes,"\\1<prd>",text)
              text = re.sub(websites,"<prd>\\1",text)
              if "Ph.D" in text: text = text.replace("Ph.D.","Ph<prd>D<prd>")
              text = re.sub("\s" + alphabets + "[.] "," \\1<prd> ",text)
              text = re.sub(acronyms+" "+starters,"\\1<stop> \\2",text)
              text = re.sub(alphabets + "[.]" + alphabets + "[.]" + alphabets + "[.]","\\1<prd>\\2<prd>\\3<prd>",text)
              text = re.sub(alphabets + "[.]" + alphabets + "[.]","\\1<prd>\\2<prd>",text)
              text = re.sub(" "+suffixes+"[.] "+starters," \\1<stop> \\2",text)
              text = re.sub(" "+suffixes+"[.]"," \\1<prd>",text)
              text = re.sub(" " + alphabets + "[.]"," \\1<prd>",text)
              if "”" in text: text = text.replace(".”","”.")
              if "\"" in text: text = text.replace(".\"","\".")
              if "!" in text: text = text.replace("!\"","\"!")
              if "?" in text: text = text.replace("?\"","\"?")
              text = text.replace(".",".<stop>")
              text = text.replace("?","?<stop>")
              text = text.replace("!","!<stop>")
              text = text.replace("<prd>",".")
              sentences = text.split("<stop>")
              sentences = sentences[:-1]
              sentences = [s.strip() for s in sentences]
              return (sentences)
              # final_output.append(sentences)
           
          print(split_into_sentences(bible))
           
          # with open('christian_forums1.csv', newline='') as csvfile:
          #     reader = csv.reader(csvfile)
          #     for row in reader:
          #         for i in range(2): 
          #             if (i==1) and (row[i]!= ""):
          #                 input_txt = row[0]
          #                 # print(text)
          #                 # list_sent = tokenize.sent_tokenize(text)
          #                 # sentences.append(list_sent)
          #                 list_sent= split_into_sentences(input_txt)
          #                 final_output.append(list_sent)
           
          # list_sent = split_into_sentences(bible)
          # for sent in list_sent:
          #     # print(sent)
          #     empty.write(sent+"\n")
          # empty.close()
          # print(list_sent)
      4. Results: 1,000+ Bible questions; find them here
    3. Get matching pairs!!! 
      1. My boyfriend suggested that I use sentence embeddings to find the best matching pairs. This library is the one I used. It was super easy to install+use! 4.75 Stars
      2. InferSent pumps out a 2D matrix containing one vector for each sentence in the list of sentences that you provide. I gave it the list of Bible questions, then ran a loop to get the embeddings of all 8 subreddit question lists.
      3. Then another loop with matrix multiplication (on the normalized embeddings) produced, for each subreddit, a matrix of the distances between the Bible questions and that subreddit's questions (a minimal sketch of this step appears at the end of this section).
      4. Since the matrix contains such precise information about how close two sentences are, I wanted to visualize this data. I saved the "distances" and used circle size to show how close each pair is.
        1. I didn't have that much time to visually design the book, which I regret, and the circle sizes were obviously not that communicative about what they represented. I ended up including a truncated version of the distances on each page.
    4. Markov chaining the responses
        1. Now that I had matched my Reddit questions to their Bible question counterparts, I wanted to get a mishmash of the respective response threads of the submissions that the questions came from.
        2. import markovify
          from praw.models import MoreComments
           
          # "pair" is one saved reddit/bible question pair (with its thread id)
          comment_body = []
          submission = reddit.submission(pair["thread_id"])
          for top_level_comment in submission.comments:
              if isinstance(top_level_comment, MoreComments):
                  continue
              # print(top_level_comment.body)
              comment_body.append(top_level_comment.body)
           
          comment_body_single_str="".join(comment_body)
          comment_body = split_into_sentences(comment_body_single_str)
           
          # print(" ".join(comment_body))
          text_model = markovify.Text(comment_body)
          for i in range(10):
              print(text_model.make_sentence())
        3. This was an example of my result:
          1.  Submission Data: {'thread_id': '7zg0rt', 'submission': 'Deep down, everyone likes it when someone has a crush on them right?   So why not just tell people that you like them if you do? Even if they reject you, deep down they feel good about themselves right? '}

            Markov Result: Great if you like it, you generally say yes, even if they have a hard time not jumping for joy.

            Us guys really do love it when someone has a crush on them right?

            I think it's a much more nuanced concept.

            In an ideal world, yesDepends on the maturity of the art.

            But many people would not recommend lying there and saying "I don't like you", they would benefit from it, but people unequipped to deal with rejection in a more positive way.

            If it's not mutual, then I definitely think telling them how you feel, they may not be friends with you or ignore you, or not able to answer?

            I've always been open about how I feel, and be completely honest.

            In an ideal world, yesDepends on the maturity of the chance to receive a big compliment.

            So while most people would argue that being honest there.

            I think that in reality it's a much more nuanced examples within relationships too.

        4. While this was ok, I felt like the Markov data just probably wasn't good (extensive) enough to sound that different from the source. It didn't seem like this would add anything to the concept, so I decided to cut it.
    5. Organizing the matching pairs 
      1. The part that I probably had the most difficulty with was trying to organize the lists from [least distant] to [most distant] pairs. It seemed really easy in my head, but for some reason, I just had a lot of trouble executing it.
        1. However, it paid off in the end, as I was able to get random pairs from only the 100 closest-related pairs from each of the 8 subreddit/Bible lists.
        2. import numpy
          import torch
          import json
           
           
          from rel_advice import rel_advice
          from legal_advice import legal_advice
          from techsupport import techsupport
          from needfriend import needfriend
          from internetparents import internet_parents
          from socialskills import socialskills
          from advice import advice
          from needadvice import needadvice
           
           
          from final_bible_questions import bible_questions
          from bible_sent import bible_sentences
           
          final_pairs= open("final_pairs.txt", "w")
          # final_pairs_py= open("final_pairs_py.txt", "w")
           
          with open("kjv.json", "r") as read_file:
              bible_corpus = json.load(read_file)
           
          rqbq_data_file = open("rqbq_data.txt", "w")
          subreddit_questions = [rel_advice, legal_advice, techsupport, needfriend, internet_parents, socialskills, advice, needadvice]
          list_subreddits = ["relationship_advice", "legaladvice", "techsupport", "needafriend", "internetparents", "socialskills", "advice", "needadvice"]
           
          bible_verses=[]
          # for 
           
           
          from models import InferSent
          V = 2
          MODEL_PATH = 'encoder/infersent%s.pkl' % V
          params_model = {'bsize': 64, 'word_emb_dim': 300, 'enc_lstm_dim': 2048,
                          'pool_type': 'max', 'dpout_model': 0.0, 'version': V}
           
          print("HELLO", MODEL_PATH)
          infersent = InferSent(params_model)
          infersent.load_state_dict(torch.load(MODEL_PATH))
           
          W2V_PATH = 'dataset/fastText/crawl-300d-2M.vec'
          infersent.set_w2v_path(W2V_PATH)
           
          with open("encoder/samples.txt", "r") as f:
              sentences = f.readlines()
           
          infersent.build_vocab_k_words(K=100000)
          infersent.update_vocab(bible_sentences)
           
          print("embed bible")
          temp_bible = bible_questions 
          embeddings_bible = infersent.encode(temp_bible, tokenize=True)
          normalizer_bible = numpy.linalg.norm(embeddings_bible, axis=1)
          normalized_bible = embeddings_bible/normalizer_bible.reshape(1539,1)
           
           
           
           
          pairs = {}
          for question_list in range(len(subreddit_questions)):
              pairs[list_subreddits[question_list]]={}
           
              print("setting variables: ", list_subreddits[question_list])
              temp_reddit = subreddit_questions[question_list] #TRIM THESE TO TEST SMALLER LIST SIZES
           
           
              print("embed", list_subreddits[question_list], "questions")
              embeddings_reddit = infersent.encode(temp_reddit, tokenize=True)
           
              print("embed_reddit dim: ",embeddings_reddit.shape)
              print("embed_bible dim: ", embeddings_bible.shape)
           
              normalizer_reddit = numpy.linalg.norm(embeddings_reddit, axis=1)
           
           
              print("normalizer_reddit dim: ", normalizer_reddit.shape)
              print("normalizer_bible dim: ", normalizer_bible.shape)
           
              temp_tuple = normalizer_reddit.shape
           
              normalized_reddit = embeddings_reddit/normalizer_reddit.reshape(temp_tuple[0],1)
           
           
              print("normalized_reddit dim:", normalized_reddit)
              print("normalized_bible dim:", normalized_bible)
           
           
              print("normed normalized_reddit dim: ", numpy.linalg.norm(normalized_reddit, ord=2, axis=1))
              print("normed normalized_bible dim: ", numpy.linalg.norm(normalized_bible, ord=2, axis=1))
           
              reddit_x_bible = numpy.matmul(normalized_reddit, normalized_bible.transpose())
              print("reddit x bible", reddit_x_bible)
           
              matrix = reddit_x_bible.tolist()
              distances = []
              distances_double=[]
              distances_index = []
              bible_indeces=[]
           
           
              for reddit_row in matrix:
                  closest = max(reddit_row)
                  bible_indeces.append(reddit_row.index(closest))
                  distances.append(closest)
                  distances_double.append(closest)
                  cur_index = matrix.index(reddit_row)
           
                  final_pairs.write("\n-------\n" + "distance: "+ str(closest)+"\n" +str(list_subreddits[question_list])+"\n"+subreddit_questions[question_list][cur_index]+"\n"+ bible_questions[reddit_row.index(closest)]+"\n-------\n")
           
           
              distances.sort()
              distances.reverse()
              for distance in distances: 
                  inde_x = distances_double.index(distance)
                  distances_index.append(inde_x)
           
              pairs[list_subreddits[question_list]]["distances"]=distances
              pairs[list_subreddits[question_list]]["distances_indexer"]=distances_index
              pairs[list_subreddits[question_list]]["bible_question"]=bible_indeces
              # print(pairs)
          rqbq_data_file.write(str(pairs))
          rqbq_data_file.close()
           
           
           
          # for pair in pairs: 
          #     # print( "\n-------\n", reddit_questions[pair],"\n", bible_questions[pairs[pair]], "\n-------\n")
          #     # export_list.append([max_nums[counter], pair, pairs[pair], reddit_questions[pair],  bible_questions[pairs[pair]]])
          #     counter+=1
           
          # # final_pairs_py.write(str(export_list))
          # # final_pairs_py.close()
          # final_pairs.close()
           
                  # nums.append(closest)
                  # max_nums.append(closest)
                  # for distance in max_nums:    
                  # row = nums.index(distance) #matrix row 
                  # column = matrix[row].index(distance)
                  # pairs[row]= column
                  # pairs[list_subreddits[question_list]][closest]={}
           
                  # reddit_bodies.append()
           
          # export_list = []
          #     nums=[]
          #     max_nums = []
          #         max_nums.sort()
          #     max_nums.reverse()
           
           
          # load 400 iterations 
          # format [distance, subreddit, reddit question, bible verse, bible question]
           
          # build dictionary in loop, and keep list of min distances 
           
          # final_pairs.write(str(pairs))
           
          # counter = 0
           
          # bible_x_reddit = numpy.matmul(embeddings_bible, reddit_trans)
          # print(bible_x_reddit)
  2. Basil.js
    1. Using Basil's CSV method, I was able to load pair data into the book.
      1. from rqbq_data import rqbq_dictionary
        from bibleverses import find_verse
        import random
        import csv
         
         
        from rel_advice import rel_advice
        from legal_advice import legal_advice
        from techsupport import techsupport
        from needfriend import needfriend
        from internetparents import internet_parents
        from socialskills import socialskills
        from advice import advice
        from needadvice import needadvice
         
        from final_bible_questions import bible_questions
         
         
         
        # print(len(rqbq_dictionary))
        # print(rqbq_dictionary.keys())
         
        list_subreddits = ["relationship_advice", "legaladvice", "techsupport", "needafriend", "internetparents", "socialskills", "advice", "needadvice"]
        subreddit_questions = [rel_advice, legal_advice, techsupport, needfriend, internet_parents, socialskills, advice, needadvice]
         
         
        write_csv=[]
         
        def getPage():
            subreddit_index=random.randint(0,7)
            subreddit = list_subreddits[subreddit_index]
            print("subreddit: ", subreddit)
            length = len(rqbq_dictionary[subreddit]["distances"])
            print("length: ", length)
            random_question = random.randint(0,500) #SPECIFY B AS CUT OFF FOR REDDIT/BIBLE ACCURACY. 1=MOST ACCURATE, LENGTH-1 = LEAST ACCURATE 
            print("random question num: ", random_question)
            print("distance of random question: ", rqbq_dictionary[subreddit]["distances"][random_question])
            print("index of random question: ", rqbq_dictionary[subreddit]["distances_indexer"][random_question])
            index_rand_q=rqbq_dictionary[subreddit]["distances_indexer"][random_question]
            index_rand_q_bible = rqbq_dictionary[subreddit]["bible_question"][index_rand_q]
            # print(index_rand_q, index_rand_q_bible)
            print("question: ", subreddit_questions[subreddit_index][index_rand_q])
            print("verse: ", bible_questions[index_rand_q_bible])
            verse = find_verse(bible_questions[index_rand_q_bible])
            write_csv.append([rqbq_dictionary[subreddit]["distances"][random_question], subreddit, subreddit_questions[subreddit_index][index_rand_q], verse, bible_questions[index_rand_q_bible]])
        # getPage()
         
        for i in range(15):
            getPage()
         
        with open('redditxBible.csv', 'w', newline='') as f:
                writer = csv.writer(f)
                writer.writerow(["distance", "subreddit", "reddit_question", "verse", "bible_question"])
                writer.writerows(write_csv)
      2. Example of one of the book's CSVs: redditxBible
    2. As mentioned earlier, not having the Bible verse data was a pain once I realized it would be nice to have the verses. So I had to run each Bible question through this code to get the verses:
      1. import json
         
        with open("kjv.json", "r") as read_file:
            bible_corpus = json.load(read_file)
         
        sample = " went up against the king of Assyria to the river Euphrates: and king Josiah went against him; and he slew him at Megiddo, when he had seen him."
         
        def find_verse(string):
            for x in bible_corpus["books"]:
                for chapter  in x["chapters"]: 
                    # print(chapter["verses"])
                    for verse in chapter["verses"]:
                        if string in verse["text"]:
                            return (verse["name"])
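         
        # Example usage (an assumed addition, not in the original script): look up the
        # verse reference containing the sample text defined above. Prints the verse
        # name, or None if no verse contains the string.
        print(find_verse(sample))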
    3. Designing the book
      1. Unfortunately, I didn't save many iterations from my design process, but I did play with having two columns, and other ways of organizing the text+typography.
    4. I had used Basil.js before in Kyu's class last year, so I already knew how to auto-resize the text boxes, which was really helpful.
      1. That way, I was able to get exact distances between all the text boxes.
    5.  I had some trouble with rotating the text boxes and getting the locations after rotation.
    6. The circle was drawn easily by mapping the distance between sentences to a relative size of the page.
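As referenced in the sentence-embedding step above, here is a minimal, self-contained sketch of the normalize-and-matrix-multiply pairing idea. It assumes embeddings_reddit and embeddings_bible are the 2D numpy arrays returned by infersent.encode(...); it condenses what the longer script does and is not the exact code used for the book.

    import numpy as np
     
    def closest_bible_questions(embeddings_reddit, embeddings_bible):
        # normalize each row to unit length so a dot product equals cosine similarity
        reddit_norm = embeddings_reddit / np.linalg.norm(embeddings_reddit, axis=1, keepdims=True)
        bible_norm = embeddings_bible / np.linalg.norm(embeddings_bible, axis=1, keepdims=True)
     
        # similarity[i][j] = cosine similarity between reddit question i and bible question j
        similarity = reddit_norm @ bible_norm.T
     
        # for each reddit question: index of the closest bible question and its score
        # (the writeup calls these "distances", but larger values mean closer pairs)
        best_index = similarity.argmax(axis=1)
        best_score = similarity.max(axis=1)
        return best_index, best_score
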
Examples

00-nannon

01-nannon

02-nannon

03-nannon

05-nannon

Thoughts

Overall, I really enjoyed making this book. This project was super interesting in that I felt like I really had to apply more programming knowledge than in previous projects in order to combine all the parts into fully usable code. They couldn't just work separately; they had to work all together too. Piping all the parts together was definitely the toughest part. I only wish I had more time to design the book, but I will probably continue working on it.

See a list of ALL (8000+) the pairs here

rigatoni-book

Punk Rock Album Generator
The Punk Rock Album Generator is spreading anarchy and rebelling against the system with its questionably plagiarized lyrics.

Going into this project I was thinking about how I typically don't incorporate my identity as a musician into my work, both in CFA and in SCS. Until recently my passion for music and punk rock has been a separate entity from my identity as an artist and creative coder. While I was looking through various books for inspiration, I found my band's bass guitarist's massive Leonard Cohen bass songbook of popular rock songs, and I realized there is a standard set of conventions used in tabs, transcriptions and sheet music that is pretty much universally followed: lyrics spaced out on top of bars of sheet music/tablature. To me this was a great way to get my feet wet with letting my musical self into my art world.

I used Markov chains over a relatively small and hand-picked corpus that consisted of lyrics from my favorite Green Day, Offspring, Rage Against The Machine and Sublime albums. After some tinkering I was satisfied with the whacky lyrical compositions being spit out by my program, and I thought it was cool that every now and then I'd recognize the particular 2-3 songs that were being mashed together; when I shared the lyrics with my band they picked up on this as well.
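The writeup doesn't include the generator code, so purely as an illustration of the idea, here is a minimal Markov-chain lyric sketch in Python using the markovify library. The file name lyrics.txt is hypothetical (standing in for the hand-picked corpus), and this is not the project's actual implementation.

import markovify
 
# hypothetical corpus file: the hand-picked lyrics, one lyric line per line
with open("lyrics.txt") as f:
    corpus = f.read()
 
# NewlineText treats each line as its own unit, which suits line-based lyrics
model = markovify.NewlineText(corpus, state_size=2)
 
# spit out a short generated "verse"
for _ in range(4):
    line = model.make_sentence(tries=100)
    if line:
        print(line)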

At first I thought learning Basil to handle layouts was going to slow me down in that I'd have to learn a new skill, but in retrospect Basil was a very versatile and intuitive library, and without it I could not have generated my totally random sheet music. Were I to revisit this project I would want to generate actual MIDI files grounded in music theory that actually sound like coherent songs, as well as add parts for more instruments. This is one of the few projects I have done this semester that I want to keep going with until I feel like I can sit down and record myself performing an entire procedurally-generated punk rock album on every instrument I know.

casher-book

Lyric Poems

A collection of rhyming poetry generated from songs from the last 50 years.

PDFs: Lyric Poems

Inspiration for this project initially came from my dad. After I told him about the guidelines and that we were going to be pulling data from random corpora, he sent me a link to a list of 10,000 fake band names. This had me thinking about possibilities that related to music. I'm part of The Cut, CMU's music magazine, so at first I thought it would be fun to generate a whole fake magazine article about a new band's first tour -- their name, their genre, a bio on each member, a generated graphic of each member (like a Bitmoji, from different head pieces), the set list of the tour, the cities that they were touring, and a mini concert review -- which I would try to include in the next issue as a joke. I realized while planning out ideas that 1) each page of the article would probably end up just being a repetitive list of words randomly chosen from each corpus, and 2) I probably wouldn't have enough time to execute all of my ideas well. Therefore, I decided I should pick one or a few of the ideas. When I was researching how to generate song titles to include in the set lists, one corpus I found also had 2.5 million song lyrics. So then I realized I could pull lyrics from those songs and make my own songs out of them.

What was especially interesting about the generative process here was that a lot of the lines make sense (meaningful sense), no matter where in the list of 2.5 million they came from. Even with a rhyme scheme. I like how this illustrates that artists have written about the same themes in their songs for decades: love, life, and loss. The combinations ended up abstract enough, however, that they resembled poetry more than songs. I ironically titled the chapter "Lyric Poetry" after an actual genre of poetry in which the poet specifically addresses emotions and feelings -- funny because, on the surface, some of these poems can seem very deep, beautiful, and emotional, but the fact that they were generated by a computer means that there is actually zero intent behind the words, making them emotionless on a deeper level.

Through the development I also discovered a bug with RiTa -- she has trouble rhyming the word "eight."

import processing.pdf.*;
import rita.*;
import cbl.quickdraw.*;
boolean bExportingPDF; 
// See https://processing.org/reference/libraries/pdf/index.html
 
int nPages; 
PImage imageArray[];
JSONArray jsonPages;
int outputPageCount = 0; 
float inch = 72;
 
String[] bandnames; 
String[] genres;
String[] lyrics;
String[] stanza;
String[] titles;
String[] colors = {"#5b97f7", "#a07ef7", "#ffb5ec"};
int numcircles = (int) random(8, 16);
QuickDraw qd;
 
int x = 160, y = 240;
RiLexicon lexicon;
Boolean pressed = true;
int numstanzas;
 
PFont caviar25;
PFont pulsab70;
PFont font1;
PFont font2;
 
Boolean on = true;
int numcircles1 = 14;
 
 
//=======================================================
void setup() {
 
// The book format is 6" x 9". 
// Each inch is 72 pixels (or points). 
// 6x72=432, 9*72=648
 
// USE THESE COMMANDS WHEN YOU'RE NOT EXPORTING PDF, 
// AND YOU JUST WANT SCREEN DISPLAY:
//size(432, 648); 
//bExportingPDF = false;
//
// BUT: IMPORTANT: 
// WHEN IT IS TIME TO EXPORT THE PDF, 
// COMMENT OUT THE ABOVE size() COMMAND, AND
// USE THE FOLLOWING TWO COMMANDS INSTEAD:
size(432, 648, PDF, "poems25.pdf");
bExportingPDF = true;
 
background(255);
lexicon = new RiLexicon();
bandnames = loadStrings("bandnames1.txt");
genres = loadStrings("musicgenres.txt");
lyrics = loadStrings("SONGS.txt");
titles = loadStrings("songtitles.txt");
qd = new QuickDraw(this, "guitars.ndjson");
caviar25 = createFont("Arvo-Regular.ttf", 10);
pulsab70 = createFont("Alpaca Scarlett Demo.ttf", 25);
font1 = createFont("Arvo-Regular.ttf", 28);
font2 = createFont("Alpaca Scarlett Demo.ttf", 50);
 
}
 
//=======================================================
void draw() {
if (bExportingPDF) {
drawForPDFOutput();
} else {
drawForScreenOutput();
}
}
 
//=======================================================
void drawForPDFOutput() {
 
// When finished drawing, quit and save the file
if (outputPageCount >= nPages) {
exit();
} 
else if (outputPageCount == 0) {
title();
PGraphicsPDF pdf = (PGraphicsPDF) g;
if (outputPageCount < (nPages-1)) {
pdf.nextPage();
}
}
else {
drawPage(outputPageCount); 
PGraphicsPDF pdf = (PGraphicsPDF) g; // Get the renderer
if (outputPageCount < (nPages-1)) {
pdf.nextPage(); // Tell it to go to the next page
}
}
 
outputPageCount++;
}
 
 
//=======================================================
void drawForScreenOutput() {
int whichPageIndex = (int) map(mouseX, 0, width, 0, nPages);
drawPage(whichPageIndex);
println(whichPageIndex);
}
 
//=======================================================
void drawPage(int whichPageIndex) {
background(255);
whichPageIndex = constrain(whichPageIndex, 0, nPages-1); 
fill(0);
//if (pressed == true) {
background(255);
for (int e = 0; e < numcircles; e++) {
noFill();
strokeWeight(random(.1,3.5));
int colorr = (int) random(170,255);
int colorg = (int) random(170,220);
int colorb = (int) random(220,245);
int rad = (int) random(30, 200);
int x = (int) random(0, width);
int y = (int) random(0, height);
//String col = random(colors);
int p = (int) random(1, 8);
for (int g = 0; g < p; g++) {
stroke(colorr, colorg, colorb, 225-(g*40));
ellipse(x, y, rad, rad);
rad*=.89;
}
}
strokeWeight(5);
pushMatrix();
stroke(180);
int guitarIndex = (int) random(0, 80);
//print("g", guitarIndex);
qd.create(0,random(height/2-100, height/2+100), 140, 140, guitarIndex);
popMatrix();
fill(255,180,80);
textFont(pulsab70);
drawText();
fill(0);
textFont(caviar25);
makeStanza();
}
 
void drawText() {
//textSize(70);
textAlign(CENTER);
int titleIndex = (int) random(titles.length);
textAlign(LEFT);
String title = titles[titleIndex];
String[] wordsInTitle = RiTa.tokenize(title);
if (wordsInTitle.length > 3) {
titleIndex+=1;
}
text(titles[titleIndex], 30, height/5);
}
 
void makeStanza() {
textAlign(LEFT);
numstanzas = (int) random(1, 8);
for (int j = 0; j < numstanzas; j++) {
for (int i = 0; i < 2; i++) {
int lyricsIndex1 = (int) random(lyrics.length);
String lyric1 = getLyric(lyrics, lyricsIndex1);
text(lyric1, 30, height/5 + 25 + 60*j + (i*2)*13);
String lyric2 = getRhymingLyric(lyrics, lyric1, lyricsIndex1);
text(lyric2, 30, height/5 + 25 + 60*j + (i*2)*13+13);
}
}
}
 
String getLyric(String lyrics[], int lyricsIndex) {
String lyric = lyrics[lyricsIndex];
if (lyric.length() < 4 ) {
return getLyric(lyrics, lyricsIndex + 1);
}
if ((lyric.toLowerCase().contains("chorus")) || (lyric.toLowerCase().contains("verse")) || (lyric.toLowerCase().contains("[")) || (lyric.toLowerCase().contains("(")) || (lyric.toLowerCase().contains("nigga"))) {
return getLyric(lyrics, (int) random(lyrics.length));
}
char firstLetter = lyric.charAt(0);
char lastLetter = lyric.charAt(lyric.length()-1);
if (firstLetter == '\"') {
return lyric.substring(1);
}
if (((firstLetter >= 'a' && firstLetter <= 'z') || (firstLetter >= 'A' && firstLetter <= 'Z')) && ((lastLetter >= 'a' && lastLetter <= 'z') || (lastLetter >= 'A' && lastLetter <= 'Z'))) {
return lyric;
} else {
return getLyric(lyrics, lyricsIndex + 1);
}
}
 
String getRhymingLyric(String[] lyrics, String lyric, int lyricsIndex) {
 
String[] words = RiTa.tokenize(lyric); //words of previous lyric
if (words.length < 3) {
words = RiTa.tokenize(getLyric(lyrics, lyricsIndex+1));
}
String lastWord = words[words.length-1]; //last word of previous lyric
for (int i = (int) random(lyrics.length); i < lyrics.length; i++ ) {
if (abs(lyricsIndex-i) <= 300) {
continue;
}
 
String newLyric = lyrics[i];
if (newLyric.length() < 3) {
continue;
}
if (newLyric.toLowerCase().equals(lyric.toLowerCase())) {
return getRhymingLyric(lyrics, lyric, lyricsIndex);
}
 
String[] newWords = RiTa.tokenize(newLyric); //words of previous lyric
if ((newWords.length < 3) || (newWords.length > 10)) {
continue;
}
if (newWords[0].startsWith("\"")) {
newWords[0] = newWords[0].substring(1);
}
 
String newLastWord = newWords[newWords.length-1]; //last word of previous lyric
String lastWordLC = lastWord.toLowerCase();
String newLastWordLC = newLastWord.toLowerCase();
int count = 0;
for (int n = 0; n < newWords.length; n++) {
String word = newWords[n].toLowerCase();
if ((word.equals("chorus")) || (word.equals("nigga")) || (word.equals("christmas")) || (word.equals("santa"))) {
count+=1;
}
}
if (count > 0) {
continue;
}
if (newLastWordLC.equals("eight")) {
continue;
}
if (lastWordLC != newLastWordLC) {
if (lyric.toLowerCase() != newLyric.toLowerCase()) {
if (RiTa.isRhyme(lastWordLC, newLastWordLC, false)) {
return newLyric;
} else {
continue;
}
}
}
}
return getLyric(lyrics, (int)random(lyrics.length-1));
}
 
void title() {
background(255);
//if (on) {
for (int e = 0; e < numcircles1; e++) {
noFill();
strokeWeight(random(.1,3.5));
int colorr = (int) random(170,255);
int colorg = (int) random(170,220);
int colorb = (int) random(220,245);
int rad = (int) random(30, 200);
int x = (int) random(0, width);
int y = (int) random(0, height);
//String col = random(colors);
int p = (int) random(1, 8);
for (int g = 0; g < p; g++) {
stroke(colorr, colorg, colorb, 225-(g*40));
ellipse(x, y, rad, rad);
rad*=.89;
}
}
textAlign(CENTER);
fill(255,180,80);
textFont(font2);
text("Lyric Poetry", width/2, height/3+50);
fill(0);
textFont(font1);
text("by Casher", width/2, height/3+85);
}

Spoon-Book

For my generative book, I chose to make a bad design student. The school of design teaches a required course for its freshman students called Placing, in which students produce photo essays. I wrote an algorithm that would generate these essays by taking other students' essays (with their permission) as source material for a markov chain. The resulting essays are a mish-mosh of all sorts of different content that addresses the same assignment (every time the code is run it produces an essay that addresses one of the four assignments given in Placing). These essays do not make much sense, but they tend to contain gems of crazy, nonsensical sentences or outlandish claims about society that are the result of multiple different sentences from the source material being put together in a strange way.

After the text is generated and saved into a JSON file, it is fed through a Basil.js script that places the resulting text onto different pages and pairs each paragraph with a photograph randomly pulled from a collection of photos used in students' essays for that particular assignment.

The code for the text generator is just a Markov chain with no additional logic added to it. I spent some time experimenting with the n-value for the chain because I was trying to find a balance between sentences that were more or less grammatical and sentences that weren't simply lifted in full from the source material. The code generates a random number of sentences in the range of 60 to 75. It then splits the resulting text into paragraphs of 5 sentences each.

The Basil.js script creates a title page for the book, then lays out two generated Placing essays (I select these essays by hand). Each page of the laid out essay features an image paired with one paragraph.

I'm not totally satisfied with the results. I would have liked the essays to be a little more interesting. At this point, they are more or less just a random set of sentences that happen to focus on the same general topic. The essays are not random enough to be funny, but they don't make enough sense to be interesting for other reasons. I might be able to make it more interesting by increasing the dataset or by having some sort of logical decision-making in the code to focus the sentences a little more.

 
 

var lines = [], markov, data = [], x = 160, y = 240;
 
var paragraphs = [];
 
var essay;
 
class Paragraph {
    constructor(title, text) {
        this.title = title;
        this.text = text;
    }
}
 
function preload() {
    var essayNum = int(random(4));
    switch(essayNum) {
    case 0 :
        essay = "Stuff";
        data[0] = loadStrings('src/Essays/stuff_delgado.txt');
        data[1] = loadStrings('src/Essays/stuff_vzhou.txt');
        data[2] = loadStrings('src/Essays/stuff_carpenter.txt');
        break;
    case 1 :
        essay = "Nature";
        data[0] = loadStrings('src/Essays/nature_delgado.txt');
        data[1] = loadStrings('src/Essays/nature_fang.txt');
        data[2] = loadStrings('src/Essays/nature_carpenter.txt');
        break;
    case 2 :
        essay = "Neighborhood";
        data[0] = loadStrings('src/Essays/neighborhood_carpenter_fan_zhang.txt');
        data[1] = loadStrings('src/Essays/neighborhood_cho_fang_nishizaki.txt');
        //data[2] = loadStrings('src/Essays/stuff_carpenter.txt');
        break;
    default :
        essay = "Trash";
        data[0] = loadStrings('src/Essays/trash_delgado_powell.txt');
        data[1] = loadStrings('src/Essays/trash_choe_fang.txt');
        data[2] = loadStrings('src/Essays/trash_carpenter_ezhou.txt');
        break;
 
    }
}
 
function setup() {
    createCanvas(500, 3500);
    textFont('times', 16);
    textAlign(LEFT);
 
    lines = ["click to (re)generate!"];
 
    // create a markov model w/ n=3
    markov = new RiMarkov(3);
 
    // load text into the model
 
    for(var i = 0; i < data.length; i++) {
        markov.loadText(data[i].join(' '));
    }
 
    drawText();
}
 
function drawText() {
    background(250);
 
    if(lines.length <= 1) {
        text(lines.join(' '), x, y, 400, 400);
    }
 
    for(var i = 0; i < lines.length; i++) {
        var line = [lines[i]];
        text(line.join(' '), x, y + (i * 410), 400, 400 + (i * 410));
    }
}
 
function keyTyped() {
    if(key === ' ') {
        x = y = 50;
        var essayLength = int(random(75, 100));//int(random(4, 9));
        var fullEssay = markov.generateSentences(essayLength);
        for(var i = 0; i < int(essayLength / 10); i++) {
            lines[i] = "";
            if((i + 1) * 10 > essayLength) {
                for(var j = i * 10; j < essayLength; j++) {
                    lines[i] = lines[i].concat(' ', fullEssay[j]);
                }
            } else {
                lines[i] = "";
                for(var j = i * 10; j < (i + 1) * 10; j++) {
                    lines[i] = lines[i].concat(' ', fullEssay[j]);
                }
            }
        }
 
        var newParagraph = new Paragraph(essay, lines);
        paragraphs[0] = newParagraph;
        drawText();
 
        var output = {};
        output.paragraphs = paragraphs;
        createButton('SAVE PARAGRAPHS')
            .position(450, 3450)
            .mousePressed(function() {
                saveJSON(output, 'essay.json');
            });
    }
}

 

 

#include "../../bundle/basil.js";
 
var numPages;
 
var neighborNums = [48, 40, 33];
var neighborNames = ["carpenter_fan_zhang", "cho_fang_nishizaki", "gersing_kim_powell"];
 
var natureNums = [5, 13, 6, 6, 5];
var natureNames = ["carpenter", "delgado", "fang", "powell", "zhai"];
 
var stuffNums = [8, 9, 2, 8, 19, 7];
var stuffNames = ["carpenter", "delgado", "fang", "powell", "vzhou", "zhai"];
 
var trashNums = [13, 12, 15, 6, 5];
var trashNames = ["carpenter_ezhou", "choe_fang", "choi_zhai", "delgado_powell", "zhai_vzhou"];
 
var usedImages = [];
 
function setup() {
	var jsonString2 = b.loadString("neighborhood/essay (18).json");
	var jsonString1 = b.loadString("stuff/essay (44).json");
 
	b.clear (b.doc());
 
	var jsonData1 = b.JSON.decode( jsonString1 );
	var paragraphs1 = jsonData1.paragraphs;
	var jsonData2 = b.JSON.decode( jsonString2 );
	var paragraphs2 = jsonData2.paragraphs;
	b.println("paragraphs: " + paragraphs1.length + "+" + paragraphs2.length);
 
	var inch = 72;
 
	var titleW = inch * 5.0;
	var titleH = inch * 0.5;
	var titleX = (b.width / 2) - (titleW / 2);
	var titleY = inch;
 
	var paragraphX = inch / 2.0;
	var paragraphY = (b.height / 2.0) + (inch * 1.5);
	var paragraphW = b.width - inch;
	var paragraphH = (b.height / 2.0) - (inch * 2.0);
 
	var imageX = inch / 2.0;
	var imageY = inch / 2.0;
	var imageW = b.width - (inch);
	var imageH = (b.height * 0.5) + inch;
 
	numPages = 0;
	usedImages.push("");
 
 
	//first page of book
	//newPage();
	numPages++;
 
	b.fill(0);
	b.textSize(52);
	b.textFont("Archivo Black", "Regular");
	b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN);
	b.text("Plagiarizing:", inch / 2.0, b.height / 2.0 - inch, b.width - inch, inch);
 
	b.fill(0);
	b.textSize(20);
	b.textFont("Archivo", "Bold");
	b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN);
	b.text("A reinterpretation of other people's photo-essays from Placing", inch / 2.0, b.height / 2.0, 
		b.width - inch, inch);
 
 
	//introduce first essay
	newPage();
	b.fill(0);
	b.textSize(36);
	b.textFont("Archivo Black", "Regular");
	b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN);
	b.text(paragraphs1[0].title + "ing", inch / 2.0, b.height - inch * 2, 
		b.width - inch, inch);
 
	b.noStroke();
	var coverImageName = imageName(paragraphs1[0].title);
	var coverImage = b.image(coverImageName, inch / 2.0, 
		inch  / 2.0, b.width - inch, (b.height / 2.0) + (2 * inch));
	coverImage.fit(FitOptions.PROPORTIONALLY);
 
	for(var i = 0; i < paragraphs1[0].text.length; i++) {
		newPage();
		b.fill(0);
		b.textSize(12);
		b.textFont("Archivo", "Regular");
		b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN);
		b.text("\t" + paragraphs1[0].text[i].substring(1), paragraphX, paragraphY, 
			paragraphW, paragraphH);
 
		b.noStroke();
		var imgName = imageName(paragraphs1[0].title);
		var img = b.image(imgName, imageX, imageY, imageW, imageH);
		img.fit(FitOptions.PROPORTIONALLY);
	};
 
	if(numPages % 2 == 0) {
		newPage();
	}
 
	//Second Photo Essay
	newPage();
	b.fill(0);
	b.textSize(36);
	b.textFont("Archivo Black", "Regular");
	b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN);
	b.text(paragraphs2[0].title + "ing", inch / 2.0, b.height - inch * 2, 
		b.width - inch, inch);
 
	usedImages = [""];
 
	b.noStroke();
	coverImageName = imageName(paragraphs2[0].title);
	coverImage = b.image(coverImageName, inch / 2.0, inch  / 2.0, b.width - inch, 
		(b.height / 2.0) + (2 * inch));
	coverImage.fit(FitOptions.PROPORTIONALLY);
 
	for(var i = 0; i < paragraphs2[0].text.length; i++) {
		newPage();
		b.fill(0);
		b.textSize(12);
		b.textFont("Archivo", "Regular");
		b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN);
		b.text("\t" + paragraphs2[0].text[i].substring(1), paragraphX, paragraphY, 
			paragraphW, paragraphH);
 
		b.noStroke();
		var imgName = imageName(paragraphs2[0].title);
		var img = b.image(imgName, imageX, imageY, imageW, imageH);
		img.fit(FitOptions.PROPORTIONALLY);
	};
 
	//give credit to original authors and photographs
	newPage();
	b.fill(0);
	b.textSize(14);
	b.textFont("Archivo", "Bold");
	b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN);
	b.text(paragraphs1[0].title + "ing", inch / 2.0, (b.height / 2.0) - (inch * 1.5), 
		b.width - inch, inch / 2.0);
 
	var authors = generateCredits(paragraphs1[0].title);
 
	b.textSize(12);
	b.textFont("Archivo", "Regular");
	b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN);
	b.text("Original text and photos by:", inch / 2.0, (b.height / 2.0) - inch, 
		b.width - inch, 14.4);
	b.text(authors.join(", "), inch, (b.height / 2.0) - inch + 14.4, 
		b.width - (inch * 1.5), inch - 14.4);
 
 
	b.textSize(14);
	b.textFont("Archivo", "Bold");
	b.textAlign(Justification.CENTER_ALIGN, VerticalJustification.CENTER_ALIGN);
	b.text(paragraphs2[0].title + "ing", inch / 2.0, (b.height / 2.0), 
		b.width - inch, inch / 2.0);
 
	authors = generateCredits(paragraphs2[0].title);
 
	b.textSize(12);
	b.textFont("Archivo", "Regular");
	b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN);
	b.text("Original text and photos by:", inch / 2.0, 
		(b.height / 2.0) + (inch * 0.5), b.width - inch, 14.4);
	b.text(authors.join(", "), inch, (b.height / 2.0) + (inch * 0.5) + 14.4, 
		b.width - (inch * 1.5), inch - 14.4);
 
	if(numPages % 2 != 0) {
		newPage();
	}
}
 
function newPage() {
	b.addPage();
	numPages++;
}
 
function imageName(assignment) {
	var fileName = "";
	while(usedImagesIncludes(fileName)){
		if(assignment == "Neighborhood") {	
			var i = b.floor(b.random(neighborNames.length));
			fileName = neighborNames[i] + b.floor(b.random(neighborNums[i]) + 1);
		} else if(assignment == "Nature") {	
			var i = b.floor(b.random(natureNames.length));
			fileName = natureNames[i] + b.floor(b.random(natureNums[i]) + 1);
		} else if(assignment == "Trash") {	
			var i = b.floor(b.random(trashNames.length));
			fileName = trashNames[i] + b.floor(b.random(trashNums[i]) + 1);
		} else {
			var i = b.floor(b.random(stuffNames.length));
			fileName = stuffNames[i] + b.floor(b.random(stuffNums[i]) + 1);	
		}
	}
	usedImages.push(fileName);
	return "images/" + assignment + "/" + fileName + ".jpg";
}
 
function usedImagesIncludes(fileName) {
	for(var i = 0; i < usedImages.length; i++) {
		if(usedImages[i] == fileName)
			return true;
	}
	return false;
}
 
function generateCredits(assignment) {
	if(assignment == "Neighborhood") {	
		return ["Sebastian Carpenter", "Danny Cho", "Sophia Fan", "Alice Fang", "Margot Gersing",
				"Jenna Kim", "Julia Nishizaki", "Michael Powell", "Jean Zhang"];
	} else if(assignment == "Nature") {	
		return ["Sebastian Carpenter", "Daniela Delgado", "Alice Fang", 
				"Michael Powell", "Sabrina Zhai"];
	} else if(assignment == "Trash") {	
		return ["Sebastian Carpenter", "Eunice Choe", "Julie Choi", "Daniela Delgado", "Alice Fang", 
				"Mimi Jiao", "Michael Powell", "Sabrina Zhai", "Emily Zhou"];
	} else {
		return ["Sebastian Carpenter", "Daniela Delgado", "Michael Powell", 
				"Sabrina Zhai", "Vicky Zhou"];	
	}
}
 
b.go();

dinkolas-book

Bioinvasive Dingus

The U.S. judicial branch, bioinvasion, war, and sociology collide with the vocabularies of Lewis Carroll and Steve Brule in their first and last ever mash-up.
Here is a .zip containing 25 iterations of 10-page chapters:

https://drive.google.com/file/d/1PfSEv24RcGyA8eCPXGnYXw5h3YIFgONi/view?usp=sharing

The text portion of this project was generated using a combination of Markov chains. First, the text body was generated from a corpus of academic papers on serious subjects ranging from Supreme Court decisions to changing migration habits. These papers were selected from the MICUSP database of student papers. The Markov chain had an n-gram length of 4 and was word-based.

Next, random nouns were selected from the text to be replaced with other generated words. The replacement words were generated letter by letter with an n-gram length of 2. They were generated from Lewis Carroll's Jabberwocky and transcripts of Check It Out! With Doctor Steve Brule. These words in isolation can be read and heard here by clicking to generate a new word: https://editor.p5js.org/dinkolas/full/ryIcv99aX

The resultant text is a mishmash of technical jargon, actual nonsensical words, and serious problems with the world that are obscured by a dense dialect. Finally, images were generated by rotating and superimposing images from Yale's face database, and overlaying selected words from the "glossary" of generated words. These images are strung through the text, breaking it up, making it even more difficult to read line to line. Visually and textually, the generated nonsensical words make it almost impossible to parse the discussion of national and global crises.

Here's the code for the text:

var dingusGen, input, markBody, final
var bigJSON = { versions: [{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] },{pages: [] }] };
 
function preload()
{
  dingus = loadStrings('dingus.txt');
  input = loadStrings('input1.txt'); 
}
 
function setup() { 
  createCanvas(500,400);
  textSize(12);
  textAlign(LEFT);
	dingusGen = new Markov(2, dingus.join(" ").split(""), " ", 100);
  markBody = new RiMarkov(4);
  markBody.loadText(input.join(' '));
 
  for (var version = 0; version < 25; version++)
  {
    for (var page = 0; page < 10; page++)
    {
      bigJSON.versions[version].pages[page] = genText(version, page);
    }
  }
  saveJSON(bigJSON, "final1.json");
}
 
function genText(version, page) {
  background(255);
  var final = RiTa.tokenize(markBody.generateSentences(20).join(' '));
  var glossary = [];
 
  for (var i = 0; i < final.length; i++)
  {
    if (RiTa.isNoun(final[i]) && random(1) < 0.5)
    {
      var result = dingusGen.generate().join("")
      while (result.length < final[i].length || result.length > final[i].length+5)
      {
        result = dingusGen.generate().join("")
      }
      if (final[i].charAt(0) == final[i].charAt(0).toUpperCase())
      {
        result = result.charAt(0).toUpperCase() + result.slice(1);
      }
      if (random(1) < 0.2)
      {
        result = RiTa.pluralize(result); 
      }
      if (random(1) < 0.2)
      {
        glossary.push(result.charAt(0).toUpperCase() + result.slice(1)); 
      }
 
      final[i] = result;
    }
  }
 
  var thisJSON = {}; 
  thisJSON.glossary = glossary;
  thisJSON.text = RiTa.untokenize(final);
  //text(version + "_" + page, 25, 25);
  text(RiTa.untokenize(final), 50, 50, 400, 400);
  return(thisJSON);
}
 
function Markov(n, input, end, maxLen) {
	this.n = n;
  this.ngrams = [];
  this.tokens = [];
  this.end = end;
  this.maxLen = maxLen;
  this.N = 0;
  this.indexFromGram = function(gram) {
    var index = 0;
    for (var i = 0; i < n; i++)
    {
      index += this.tokens.indexOf(gram[i]) * pow(this.tokens.length, n - i - 1);
    }
    return index;
  }
  this.indexToGram = function(index) {
    var gram = [];
    for (var i = 0; i < n; i++)
    {
      gram.unshift(this.tokens[(Math.floor(index/(pow(this.tokens.length, i)))) % (this.tokens.length)]);
    }
    return gram;
  }
 
  for (var i = 0; i < input.length; i++)
  {
    if (!(this.tokens.includes(input[i])))
    {
      this.tokens.push(input[i]);
    }
  }
 
  for (i = 0; i < pow(this.tokens.length, n); i++)
  {
    this.ngrams.push(0);
  }
 
  var gram = [];
  for (i = 0; i < input.length - n + 1; i++)
  {
    gram = []
    for (var j = 0; j < n; j++)
    {
      gram.push(input[i + j]);
    }
    this.ngrams[this.indexFromGram(gram)] ++;
  }
 
  for (i = 0; i < this.ngrams.length; i++)
  {
    this.N += this.ngrams[i];
  }
 
 
  this.seed = function() {
    var randInd = Math.floor(random(this.N));
    var n = 0;
    for (var i = 0; i < this.ngrams.length; i++) {
      n += this.ngrams[i];
      if (n > randInd) 
      {
        return this.indexToGram(i);
      }
    }
    print("seed is fucked");
    return [];
  }
 
  this.nextToken = function(gram) {
    gram.push(this.tokens[0]);
    var index0 = this.indexFromGram(gram);
    var N = 0;
    for (var i = 0; i < this.tokens.length; i++)
    {
      N += this.ngrams[index0 + i];
    }
    var n = 0;
    var randInd = Math.floor(random(N));
    for (i = 0; i < this.tokens.length; i++) {
      n += this.ngrams[index0 + i];
      if (n > randInd) return this.tokens[i];
    }
    print("nextToken is fucked");
    print(gram);
    return 0;
  }
 
  this.generate = function() {
    var out = this.seed();
    //print("out", out);
    var i = 0;
    while (out.includes(this.end) && i < this.maxLen)
    {
    	out = this.seed();
      i++
    }
    i = 0;
    while (out[out.length - 1] != this.end && i < this.maxLen)
    {
      out.push(this.nextToken(out.slice(out.length - n + 1, out.length)));
      i++;
    }
    return out.splice(0,out.length-1);
  }
}

And the code for the images:

var book;
var images = []
var img;
var offset;
var c;
var wordsSoFar;
 
function preload() {
  for (var i = 1; i <= 230; i+=10) {
    var filename;
    if (i >= 100){ filename = "" + i +".png";}
    else if (i >= 10){filename = "0" + i + ".png";}
    else {filename = "00" + i + ".png";}
    images.push(loadImage("images/"+filename));
  }
  book = loadJSON('bigBoyFinal1.json');
  wordsSoFar = loadStrings('wordsSoFar.txt');
}
 
function setup() {
  c = createCanvas(300, 300);
  //imageMode(CENTER);
  textAlign(CENTER);
  textSize(40);
  background(200);
  var count = 0;
  print(book.versions[0].pages[0].glossary);
  for (var v = 0; v < book.versions.length; v++)
  {
    for (var p = 0; p < book.versions[v].pages.length; p++)
    {
      for (var w = 0; w < book.versions[v].pages[p].glossary.length; w++)
      {
        genImage(book.versions[v].pages[p].glossary[w]);
      }
    }
  }
}
 
function genImage(word) {
  background(255);
  blendMode(BLEND);
  var offset = 1;
  for (var i = 0; i < word.length/2; i++)
  {
    push();
    translate(150,150);
    rotate(random(360));
    translate(-150,-150);
    tint(255,127);
    image(images[Math.floor(random(images.length))], 0, 0, width, height*offset);
    blendMode(MULTIPLY);
    if (i % 2 == 0) blendMode(ADD);
    offset += 0.1;
    pop();
  }
  saveCanvas(c, Math.floor(random(50,100)), "jpg");
}

And the code for the PDF generation in basil.js:

#includepath "~/Documents/;%USERPROFILE%Documents";
 
#include "basiljs/bundle/basil.js";
 
// to run this example 
 
 
function draw() 
{
    console.log("hello");
 
	var dpi = 72;
	b.textSize(9);
	var json = b.JSON.decode(b.loadString("bigBoyFinal1.json"));
	b.clear(b.doc());
	for (var v = 0; v < json.versions.length; v++)
	{
		b.page(1);
		var wSum = 0;
		b.clear(b.doc());
		for (var p = 0; p < json.versions[v].pages.length; p++)
		{
			b.noStroke();
			b.fill(0);
			b.textFont('Gadugi', 'Regular');
			var currentText = b.text(json.versions[v].pages[p].text, 1*dpi, 1*dpi, 4*dpi, 7*dpi);
			var i = 0;
			for (var word in b.words(currentText))
			{
				if (word in json.versions[v].pages[p].glossary)
				{
					b.typo(b.words(currentText)[i], 'appliedFont', 'Gadugi\tBold');
				}
				i++;
			}
 
			for (var w = 0; w < json.versions[v].pages[p].glossary.length; w++)
			{
 
				wSum++;
				var x = 3*dpi + 2*dpi*b.sin(wSum/3);
				var y = b.map(w, 0, json.versions[v].pages[p].glossary.length, 1.2*dpi, 8.2*dpi);
				b.noFill();
				var wrapper = b.ellipse(x, y, 0.9*dpi+0.2*dpi*b.sin(wSum/4), 0.9*dpi+0.2*dpi*b.sin(wSum/4));
				wrapper.textWrapPreferences.textWrapMode = TextWrapModes.CONTOUR;
				var circle = b.ellipse(x, y, 0.75*dpi+0.2*dpi*b.sin(wSum/4), 0.75*dpi+0.2*dpi*b.sin(wSum/4));
				try {
					var imgCircle = b.image("FaceImgBgs/" + Math.floor(b.random(73)) + ".jpg", circle);
				}
				catch(error) {
					b.fill(b.random(10,70));
					b.ellipse(x, y, 0.75*dpi+0.2*dpi*b.sin(wSum/4), 0.75*dpi+0.2*dpi*b.sin(wSum/4));
				}
				b.fill(255,255,255);
				b.textAlign(Justification.CENTER_ALIGN);
				var myText = b.text(json.versions[v].pages[p].glossary[w],x-1*dpi,y-0.04*dpi,2*dpi,0.5*dpi);
				myText.textFramePreferences.ignoreWrap = true;
				b.textAlign(Justification.LEFT_ALIGN);
			}
			if (v == 0) 
			{
				if (p < json.versions[v].pages.length - 1){b.addPage();}
			}
			else
			{
				if (p < json.versions[v].pages.length - 1){b.page(p+2)}
			}
		}
		//if (v < 10){ b.savePDF("0" + v + "_dinkolas.pdf", false);}
		//else { b.savePDF(v + "_dinkolas.pdf", false);}
 
	}
}
 
 
 
b.go();

 

shuann-book

URL to ZIP: https://drive.google.com/file/d/1RQSg3IoQqV_9MWc9p3Vng2feqgH-IsUT/view?usp=sharing

Title: A Guide to Absurd Movies

One sentence description: Explore absolutely absurd movies with this chapter. Yet remember: the biggest challenge is where to find them.

I first had the idea of making a recipe book for creating new movies. For the ingredients, I would throw in some random directors, actors, writers, etc., and then generate a random plot to go along with them. However, as I discovered online resources containing real user reviews, I wanted to include those as well. Eventually, my chapter became something like a review book of non-existent movies: it is the interplay between the fake and the real that I am personally most interested in.

The entire project is generated with Markov chains, because I wanted completely bizarre plot lines that the computer nonetheless marks as plausible, with as little intervention (i.e. predefined grammar) as possible. However, because of this technique, I had to spend hours and hours cleaning up the data I was feeding into the program. For example, I changed all the names of the male protagonists to Chris and those of the female protagonists to Jade, so that names appearing in only one movie would not break the Markov chain. I also did a lot of filtering so that the plot content is not so scattered that two adjacent generated sentences make no sense.
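Here is a minimal sketch of that name-normalization step, assuming hypothetical per-movie character lists (maleNames and femaleNames are placeholders; the actual cleanup was done on the raw plot files before they were fed to RiMarkov):

// Hypothetical cleanup pass: collapse every protagonist name to "Chris"/"Jade"
// so that rare names don't create one-off states in the Markov model.
var maleNames = ["Harry", "Frodo", "Marty"];      // placeholder lists
var femaleNames = ["Hermione", "Leia", "Ripley"];
 
function normalizeNames(plotText) {
  var cleaned = plotText;
  for (var i = 0; i < maleNames.length; i++) {
    // "\\b" keeps the replacement on whole words only
    cleaned = cleaned.replace(new RegExp("\\b" + maleNames[i] + "\\b", "g"), "Chris");
  }
  for (var j = 0; j < femaleNames.length; j++) {
    cleaned = cleaned.replace(new RegExp("\\b" + femaleNames[j] + "\\b", "g"), "Jade");
  }
  return cleaned;
}
 
// e.g. normalizeNames("Harry meets Hermione.") returns "Chris meets Jade."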

You can see here that too many names and proper nouns break the reading experience.

It reads a lot better with cleaned-up data, though more work was still needed.

Also, to make sure the first sentence makes sense, I needed to incorporate many validators so that, for example, the sentence always contains a determiner, does not start with a conjunction, etc. I also tried to make sure that each pronoun matches the sex of the person named: e.g. "him" is changed into "her" when the subject of the first sentence is identified as female.

Screenshots of the book pages: 

 var metaData;
var plot;
var posReview, negReview;
var titleTracker = 1;
var dirList = [];
var writerList = [];
var actorList = [];
var plotList = [];
 
var linesPlot = [];
var linesReview = [];
var markovPlot, x = 160, y = 240;
var markovPosReview, markovNegReview, x1 = 160, y1 = 200;
 
var bannedBeginner = ["He", "His", "Her", "She", "They", "Their", "We", "Another", "Suddenly", "However", "The next"];
var bannedMid = ["then", "as well", "also", "not only"];
var male = ["he", "Chris", "Chris's", "him", "male", "man", "his", "gentlemen", "boy", "himself"];
var female = ["she", "Jade", "Jade's", "her", "female", "woman", "her", "lady", "girl", "herself"];
 
var finalReivew = [];
var curDir = [];
var curActor = [];
var curWriter = [];
 
var movies = [];
 
function preload(){
  metaData = loadJSON("movieDetails.json");
 
  plot = loadStrings('illusion.txt');
  posReview = loadStrings('posReview.txt');
  negReview = loadStrings('negReview.txt');
}
 
function setup() {
  createCanvas(600, 600);
  textFont('times', 16);
  textAlign(LEFT);
 
  // create a markov model w' n=4
  markovTitle = new RiMarkov(2);
  markovPlot = new RiMarkov(3);
  markovPosReview = new RiMarkov(4);
  markovNegReview = new RiMarkov(4);
 
  // load text into the model
  markovTitle.loadText(plot.join(' '));
  markovPlot.loadText(plot.join(' '));
  markovPosReview.loadText(posReview.join(' '));
  markovNegReview.loadText(negReview.join(' '));
 
  extractInfo();
 
  for (var i = 0; i < 15; i++){
    generate(i);
  }
 
  // Create a JSON Object, fill it with the Poems.
  var myJsonObject = {};
  myJsonObject.movies = movies;
  // console.log(myJsonObject);
 
  // Make a button. When you press it, it will save the JSON.
  createButton('SAVE POEMS BUTTON')
    .position(10, 10)
    .mousePressed(function() {
      saveJSON(myJsonObject, 'myPoems.json');
    });
}
 
function combineText(index) {
  var finalPlot = linesPlot.join(' ');
  var finalPlot = linesPlot.join(' ');
  for (var i = 0; i < linesReview.length; i++){
    finalReivew[i] = linesReview[i].join(' ');
  }
 
  var title = "Absurd Movie No." + titleTracker;
  titleTracker += 1;
 
  movies[index] = new moiveInfo(title, curDir, curActor, curWriter, finalPlot, finalReivew);
  console.log(movies[index]);
}
 
function generate(index) {
  randBasicInfo();
 
  linesPlot = markovPlot.generateSentences(2);
  checkPlot();
  checkAgreement();
 
  for (var i = 0; i < 3; i++){
    if (random() < 0.7){
      linesReview[i] = markovNegReview.generateSentences(random([3, 4]));
    } else {
      linesReview[i] = markovPosReview.generateSentences(random([2, 3, 4]));
    }
  }
 
  combineText(index);
}
 
//generate basic movie setup info
function randBasicInfo(){
  curDir = [];
  curActor = [];
  curWriter = [];
  var dirNum = 1;
  var actNum = random([3, 4, 4]);
  var wriNum = random([1, 2, 2]);
 
  for (var a = 0; a < dirNum; a++){
    curDir[a] = random(dirList);
  }
 
  for (var b = 0; b < actNum; b++){
    curActor[b] = random(actorList);
  }
 
  for (var c = 0; c < wriNum; c++){
    curWriter[c] = random(writerList);
  }
}
 
function checkPlot() {
  //get all words in the first sentence
  var sentence1 = linesPlot[0].split(' ');
 
  //replace first sentence when it begins with pronoun
  if (checker(sentence1[0], bannedBeginner) || checkConjunction(sentence1[0])){
 
    linesPlot[0] = markovPlot.generateSentence();
    checkPlot();
  } else {
    //look at each word in the sentence
    for (var i = 0; i < sentence1.length; i++){
      sentence1[i] = RiTa.trimPunctuation(sentence1[i]);
    }
    // console.log(sentence1);
    // console.log("here");
 
    // if (checker(sentence1, bannedMid) && checker(sentence1, charNames) === false){
    //   console.log(linesPlot[0]);
    //   linesPlot[0] = markovPlot.generateSentence();
    //   console.log(linesPlot[0]);
    // }
 
    if (checker(sentence1, bannedMid)){
      linesPlot[0] = markovPlot.generateSentence();
      checkPlot();
    }
 
    if (checkDeterminer(sentence1) === false){
      linesPlot[0] = markovPlot.generateSentence();
      checkPlot();
    }
  }
}
 
// https://stackoverflow.com/questions/37428338/check-if-a-string-contains-any-element-of-an-array-in-javascript
function checker(value, ban) {
  for (var i = 0; i < ban.length; i++) {
    if (value.indexOf(ban[i]) > -1) {
      return true;
    }
  }
  return false;
}
 
function checkDeterminer(value){
  for (var a = 0; a < value.length; a++){
    if (RiTa.getPosTags(value[a]) == "dt"){
      return true;
    }
  }
  return false;
}
 
function checkConjunction(value){
  var tag = RiTa.getPosTags(value);
  for (var i = 0; i < tag.length; i++){
    if (tag[i] === "in" || tag[i] === "cc"){
      return true;
    }
  }
  return false;
}
 
function checkAgreement() {
  var sex = null;
  // sentence1Agg();
  // sentence2Agg();
 
  var sentence1 = linesPlot[0].split(' ');
  var sentence2 = linesPlot[1].split(' ');
 
  for (var i = 0; i < sentence1.length; i++){
    sentence1[i] = RiTa.trimPunctuation(sentence1[i]);
    if (sex === null){
      if (male.indexOf(sentence1[i]) > -1){
        sex = "M";
      }
      if (female.indexOf(sentence1[i]) > -1){
        sex = "F";
      }
    }
  }
 
  for (var a = 0; a < sentence1.length; a++){
    if (RiTa.getPosTags(sentence1[a]) == "prp"){
      if (male.indexOf(sentence1[a]) > -1 && sex === "F"){
        var index = male.indexOf(sentence1[a]);
        sentence1[a] = female[index];
      }
 
      if (female.indexOf(sentence1[a]) > -1 && sex === "M"){
        // look up the index in the female list so the male counterpart lines up
        var index = female.indexOf(sentence1[a]);
        sentence1[a] = male[index];
      }
    }
  }
 
  linesPlot[0] = sentence1.join(' ') + ".";
}
 
function moiveInfo(title, dir, actor, writer, plot, reviews){
  this.title = title;
  this.direc = dir;
  this.actor = actor;
  this.writer = writer;
  this.plot = plot;
  this.reviews = reviews;
}
 
function extractInfo(){
  console.log(metaData.results.length);
  for (var i = 0; i < metaData.results.length; i++){
    if (metaData.results[i].imdb.rating >= 8){
      //extract the names of directors on file for all movies with imdb rating >= 8
      if (metaData.results[i].director != null){
        if (metaData.results[i].director.length > 1){
          var dirArray = metaData.results[i].director.split(', ');
          for (var a = 0; a < dirArray.length; a++){
            append(dirList, dirArray[a]);
          }
        } else {
          var dirArray = metaData.results[i].director;
          append(dirList, dirArray);
        }
      }
      if (metaData.results[i].writers != null){
        if (metaData.results[i].writers.length >= 1){
          for (var b = 0; b < metaData.results[i].writers.length; b++){
            append(writerList, metaData.results[i].writers[b]);
          }
        }
      }
      if (metaData.results[i].actors != null){
        if (metaData.results[i].actors.length >= 1){
          for (var c = 0; c < metaData.results[i].actors.length; c++){
            append(actorList, metaData.results[i].actors[c]);
          }
        }
      }
    }
  }
}

Sources:

review.json from: https://github.com/Edmond-Wu/Movie-Reviews-Sentiments

Plot summary dataset from: http://www.cs.cmu.edu/~ark/personas/

Example movie dataset from: https://docs.mongodb.com/charts/master/tutorial/movie-details/prereqs-and-import-data/

chewie-book

High Stakes -  A real account of what's happening in poker.
archive

When I was 6, every morning while I was eating cereal I would have a big Calvin and Hobbes book open in front of me. I never read it out of a newspaper, where I would have had to wait a whole day for the next little morsel; instead we had these collection books full of strips that I could blow through like binge-watching Netflix. There's no doubt a running narrative between the strips (some even continue the same event), but each also exists as its own distinct story, like a joke in a comedy routine. This is why, in researching and developing ideas for this project, I was excited to find an online database containing plot descriptions of every published Calvin and Hobbes comic strip.

Thinking about the different uses of these texts, I wondered what Calvin and Hobbes was at its lowest level and what types of events transpired in the strip. Calvin is relentlessly true to himself and his beliefs despite the pressure he faces from his peers and superiors to act "normal": social rejection, being grounded, and getting scolded by his teachers, to name a few. There's something admirable about the willingness to believe in yourself to that extent, but it also causes a great deal of inefficiency when you refuse to think based on observation or even speculation and only perpetuate and expand on your existing ideas.

This is almost the complete opposite of some thoughts I've had about poker.

In this relentless game you are restricted to a finite set of actions, and (at least at the professional level) if you aren't able to make the most efficient set of actions based on the changing state of the game and the mechanisms of probability, you lose with no second chance. Because these two worlds are so contradictory I thought it would be interesting and amusing to combine references to both in these short, narrative descriptions.

I decided to use the texts by replacing the main characters with popular poker players, and replacing some of the Calvin and Hobbes language with poker terms. These terms were collected from an article describing all of the rules for playing no-limit Texas hold 'em. The results were interesting and at times amusing, although they definitely weren't completely coherent.

For the background images, my program went through each text, added each instance of a player's name to a list, and then picked randomly from that list, so that more frequently mentioned players were more likely to be shown in the background picture. The front and back cover pages are illustrations from one of the gorgeous full-color strips released on Sundays.
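Here is a minimal sketch of that frequency-weighted selection, assuming a plain list of player names (the players array is a placeholder; the real code below tracks single-letter codes and maps them to numbered image files):

// Hypothetical sketch: every mention of a player pushes their name onto a list,
// then one entry is drawn at random, so frequently mentioned players win more often.
function pickFeaturedPlayer(pageText) {
  var players = ["Negreanu", "Dwan", "Selbst", "Ivey", "Tilly"]; // placeholder list
  var mentions = [];
  var tokens = pageText.split(" ");
  for (var i = 0; i < tokens.length; i++) {
    if (players.indexOf(tokens[i]) > -1) {
      mentions.push(tokens[i]);
    }
  }
  if (mentions.length === 0) return players[0]; // fall back to a default player
  return mentions[Math.floor(Math.random() * mentions.length)];
}
 
// e.g. pickFeaturedPlayer("Dwan raises and Dwan wins while Selbst folds")
// returns "Dwan" about two times out of three.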

 

Code in Java for ripping summaries of every Calvin and Hobbes comic to a .txt file.

PrintWriter output;
 
 
void setup() {
  output = createWriter("positions.txt");
  String t;
  int max = 3150;
 
  output.println("{");
 
  for (int i=1; i<max+1; i++) {
    println(i);
    t = getJist(i);
    t = t.replace("\"","'");
 
    output.print("\""+str(i)+"\" : \"");
    output.print(t);
 
    output.print("\"");
    if (i<max) output.print(",");
    output.println("");
  }
  output.println("}");
  output.close();
}
 
String getJist(int n) {
  if ((n<1)||(n>3150)) return "invalid index";
  else {
    String[]t;
    t = loadStrings("http://www.reemst.com/calvin_and_hobbes/stripsearch?q=butt&search=normal&start=0&details="+str(n));
 
    String line = t[56];
    t = line.split(" ");
    line = t[1]; 
    return line; 
  }
}

Code in Javascript for modifying the texts and outputting to .json:

var t,p,corp, rm;
 
var availPos = ["nns","nn","jj"];//,"vbg","vbn","vb","vbz","vbp"];
 
 
 
var corp2 = {
  "nns": [],
  "nn": [],
  "jj": [],
  "vbg": [],
  "vbn": [],
  "vb": [],
  "vbz": [],
  "vbp": [],
  "rb": []
}
 
var nns = [];
var nn = [];
var jj = [];
var vbg = [];
var vbn = [];
var vb = [];
var vbz = [];
var vbp = [];
var rb = [];
 
 
 
 
var reps = [
  ["Calvin","Negreanu"],
  ["Hobbes","Dwan"],
  ["Mom","Selbst"],
  ["Dad","Ivey"],
  ["Susie", "Tilly"],
  ["Christmas", "WSOP"],
  ["parent", "sponsor"],
  ["Parent", "Sponsor"],
  ["Tiger", "Dealer"],
  ["tiger", "dealer"],
  ];
 
function preload() {
  t = loadJSON("jists.json");
  p = loadStrings("poker.txt");
}
 
function setup() {
  createCanvas(400, 400);
  //print("okay");
  loadToks();
  var texts = [];
  var fake;
  for (var i=0; i<3000; i++) {
    ttt = doer(int(random(3150)));
    fake = new RiString(ttt);
    fake.replaceAll("\"", "");
    fake.replaceAll(" ,", ",");
    ttt = fake._text;
    texts.push(ttt);
  }
  var ret = {"a":texts};
  saveJSON(ret,"all.json");
  //print(availPos);
}
 
function draw() {
  background(220);
}
 
function doer(n) {
  var j = RiString(t[n]);
  for (var i in reps) {
    j.replaceAll(reps[i][0], reps[i][1]);
  }
  //print(j);
  return advRep(j._text);
}
 
function loadToks() {
  var movie = join(p," ");
  rm = new RiMarkov(3);
  var toks = movie.split(" ");
  var om,rs;
  var tooks = [];
  for (var i in toks) {
    if (toks[i].length>3) {
    	om = split(split(split(toks[i],".")[0],",")[0],"?")[0];
      //if (RiTa.isNoun(om)&& !(RiTa.isAdjective(om))) {
      rs = new RiString(om);
      rs = rs.replaceAll("(","");
      rs = rs.replaceAll(")","");
      rs = rs.replaceAll(":","");
      rs = rs.toLowerCase();
      rs = rs.trim();
    	var ppp = RiTa.getPosTags(rs._text)[0];
      if (availPos.indexOf(ppp)!=-1 ) {
        tooks.push(rs._text);
        corp2[ppp].push(rs._text);
      }
 
    }
    //print(toks[i]);
  }
  //print(corp2)
  //saveJSON(corp2,"corp2.json");
  rm.loadTokens(tooks);
}
 
function advRep(s) {
  var poss = RiTa.getPosTags(s);
  var toks = RiTa.tokenize(s);
  var stringy = "";
  var randInt;
  for (var i in toks) {
    if (availPos.indexOf(poss[i])!=-1 && int(random(3))==0) {
      randInt = int(random(corp2[poss[i]].length))
      if (!(RiTa.isVerb(toks[i])) || poss[i]=="vbg") {
        for (var j in poss) {
          if (toks[j] == toks[i]) toks[j] = corp2[poss[i]][randInt];
        }
      }
      else {
        for (var j in poss) {
          if (toks[j] == toks[i]) toks[j] = corp2[poss[i]][randInt];
        }
      }
 
    }
  }
  var stringgg = new RiString(join(toks, " "));
  stringgg.replaceAll(" .", ".");
  return str(stringgg._text);
 
}

Basil.js code:

#include "../../bundle/basil.js";
 
// Version for basil.js v.1.1.0
// Load a data file containing your book's content. This is expected
// to be located in the "data" folder adjacent to your .indd and .jsx.
// In this example (an alphabet book), our data file looks like:
// [
//    {
//      "title": "A",
//      "image": "a.jpg",
//      "caption": "Ant"
//    }
// ]
var jsonString;
var jsonData;
var text = ["*here is where I included the quotes*"];
];
 
//--------------------------------------------------------
function setup() {
  var randSeed = 2892;
  while (b.pageCount()>=2) b.removePage();
 
  // Load the jsonString.
  jsonString = b.loadString("lines.json");
 
  // Clear the document at the very start.
  b.clear (b.doc());
 
 
  var imageX = 72*1.5;
  var imageY = 72;
  var imageW = 72*4.5;
  var imageH = 72*4.5;
  var anImageFilename = "images/front.jpg";
 
  var anImage = b.image(anImageFilename, 35, 35, 432-35*2, 648-35*2);
  anImage.fit(FitOptions.FILL_PROPORTIONALLY);
 
  // Make a title page.
  b.fill(244, 215, 66);
  b.textSize(48);
  b.textFont("Calvin and Hobbes","Normal");
  b.textAlign(Justification.LEFT_ALIGN);
  b.text("CHEWIE", 60,540,360,100);
 
 
  // Parse the JSON file into the jsonData array
  jsonData = b.JSON.decode( jsonString );
  b.println(jsonData);
 
 
  // Initialize some variables for element placement positions.
  // Remember that the units are "points", 72 points = 1 inch.
  var titleX = 195;
  var titleY = 0;
  var titleW = 200;
  var titleH = 600;
 
  var captionX = 72;
  var captionY = b.height - 108;
  var captionW = b.width-144;
  var captionH = 36;
 
  var txt, tok, max;
 
  var just;
  var names = ["n"];
  // Loop over every element of the book content array
  // (Here assumed to be separate pages)
 
  for (var i = 0; i < 9; i++) {
    // Create the next page.
    b.addPage();
    txt = text[randSeed+i];
    tok = b.splitTokens(txt," ");
    for (var j=tok.length; j>=0; j--) {
      if (tok[j] === "Dwan"){
        names.push("d");
      }
      if (tok[j] === "Hellmuth") {
        names.push("h");
      }
      if (tok[j] === "Selbst") {
        names.push("s");
      }
      if (tok[j] === "Tilly") {
        names.push("t");
      }
      if (tok[j] === "Negreanu") {
        names.push("n");
      }
    }
 
    var ic = b.floor(b.random(0,names.length));
 
    ic = names[ic];
    names = [];
    if (ic == "d") max=6;
    if (ic == "h") max=7;
    if (ic == "i") max=11;
    if (ic == "n") max=9;
    if (ic == "s") max=6;
    if (ic == "t") max=3;
 
    anImageFilename = "images/"+ic + (i%max+1)+".jpg";
 
    // Load an image from the "images" folder inside the data folder;
    // Display the image in a large frame, resize it as necessary.
 // no border around image, please
 
 
 
    anImage = b.image(anImageFilename, 0, 0, 432, 648);
    anImage.fit(FitOptions.FILL_PROPORTIONALLY);
    b.opacity(anImage,70);
    if (i%2==0) {
      titleX = 50;
      just = Justification.LEFT_ALIGN;
    } else {
      titleX = 190;
      just = Justification.RIGHT_ALIGN;
    }
 
    b.textSize(16);
    b.fill(0);
 
    b.textFont("DIN Alternate","Bold");
    b.textAlign(just, VerticalJustification.BOTTOM_ALIGN  );
    var ttp = b.text(txt, titleX,titleY,titleW,titleH);
 
    // Create textframes for the "caption" fields
    b.fill(0);
    b.textSize(36);
    b.textFont("Helvetica","Regular");
    b.textAlign(Justification.LEFT_ALIGN, VerticalJustification.TOP_ALIGN );
    //b.text(jsonData."first"[0].caption, captionX,captionY,captionW,captionH);
 
  };
  b.addPage();
  imageX = 72*1.5;
  imageY = 72;
  imageW = 72*4.5;
  imageH = 72*4.5;
  anImageFilename = "images/back.jpg";
 
  anImage = b.image(anImageFilename, 35, 35, 432-35*2, 648-35*2);
  anImage.fit(FitOptions.FILL_PROPORTIONALLY);
}
 
// This makes it all happen:
b.go();

paukparl-book

Generated Self-Help Books

These are generated book covers (front and back) of self-help books, each with a different instruction on how to live your life.

 

Process

Lately I've been consumed by an unhealthy obsession with how to lead my life and how to start my career. Since we were making a book, I wanted to make it about me. I hoped to resolve some of my issues through introspection.

During the process, I was somehow reminded of the sheer number of self-help books out there that instruct you on how to live your life. When you see that many of them, you start to think that living your life to the fullest must be that important, when it might just as well not be. My book was an attempt to emulate, and thereby mock, these self-help books.

I based most of my word selections on text corpora and used minimal code from the RiTa.js library. For example, the title is drawn from corpora/data/words/common.json with a few filters such as !RiTa.isNoun(). I also made a few template strings for subtitles, and a few arrays of words related to success. I think I could have hidden the repeating patterns and made more clever, controlled title-subtitle matches by using the Wordnik API. But there are still some examples that show the script is doing its job.

google drive link to 25 instances

 

Code

Some snippets of code showing functions for title and subtitle generation, and where I got the data.

adv = ['conveniently', 'quickly', 'certainly', 'effortlessly', 'immediately', 'completely']
adj = ['convenient', 'undisputed', 'quick', 'secret', 'true', 'groundbreaking', 'revolutionary']
obj = ['success', 'a successful career', 'true happiness', 'a new life', 'a healthier lifestyle']
v = ['succeed', 'win', 'challenge yourself', 'be successful', 'achieve success', 'get ahead in life', 'be famous', 'be happy', 'lose weight', 'make your dreams come true', 'plan ahead', 'make more friends']
 
function preload() {
    firstNames = loadJSON('/corpora-master/data/humans/firstNames.json');
    lastNames = loadJSON('/corpora-master/data/humans/authors.json');
    common = loadJSON('/corpora-master/data/words/common.json');
    adverbs = loadJSON('/corpora-master/data/words/adverbs.json');
    adjectives = loadJSON('/corpora-master/data/words/encouraging_words.json');
    newspapers = loadJSON('/corpora-master/data/corporations/newspapers.json');
}
 
...
...
...
 
function genTitle() {
    var temp = common[floor(random(common.length))];
    if (RiTa.isNoun(temp) || RiTa.isAdjective(temp) || RiTa.isAdverb(temp)
        || !RiTa.isVerb(temp) || temp.length > 7) return genTitle();
    else return temp;
}
 
function randomDraw(array) {
    return array[floor(random(array.length))];
}
 
function genSubtitle() {
    temp = random(10);
    var str;
    if (temp<2.3) str = 'How to ' + randomDraw(adv) + ' ' + randomDraw(v) + ' and ' + randomDraw(v);
    else if (temp<4.6) str = 'A ' + floor(random(12)+3) + '-step guide to ' + randomDraw(obj);
    else if (temp<6.6) str = floor(random(9)+3) + ' ways to ' + randomDraw(adv) + ' ' + randomDraw(v);
    else if (temp<8.6) str = 'A ' + randomDraw(adj) + ' guide on how to ' + randomDraw(v);
    else str = 'If only I had known then these ' + floor(random(12)+3) + ' facts of life';
 
    return str;
}
 
function genAuthor() {
        gen = {};
        gen['books'] = [];
        gen['author'] = firstNames[floor(random(firstNames.length))] 
                + ' ' + lastNames[floor(random(lastNames.length))];
 
        for (let i=0; i<8; i++) {
            book = {};
            book['title'] = genTitle();
            book['subtitle'] = genSubtitle();
 
            book['quotes'] = [];
            book['reviews'] = [];
            for (let j=0; j<5; j++) {
            review = {}
            temp = random(4);
            var reviewString;
            if (temp<1.2) reviewString = adverbs[floor(random(adverbs.length))] + ' ' + adjectives[floor(random(adjectives.length))] + ' and ' + adjectives[floor(random(adjectives.length))] +'.';
            else if (temp<2.0) reviewString = adverbs[floor(random(adverbs.length))] + ' ' + adjectives[floor(random(adjectives.length))] + ' but ' + adjectives[floor(random(adjectives.length))] +'.';
            else if (temp<2.8) reviewString = adverbs[floor(random(adverbs.length))] + ' ' + adjectives[floor(random(adjectives.length))] +'!';
            else if (temp<3.5) reviewString = adverbs[floor(random(adverbs.length))] + ' ' + adjectives[floor(random(adjectives.length))] +'.';
            else reviewString = adjectives[floor(random(adjectives.length))] + '...'
            review['text'] = uppercase(reviewString);
            review['by'] = newspapers[floor(random(8, newspapers.length))];
            book['reviews'].push(review);
            }
 
            book['price'] = '$' + floor(random(7, 11)) + '.50' // 7 to 10
            gen['books'].push(book);
        }
        saveJSON(gen, 'book'+ n +'.json');
}

airsun-book

Link of the zip file: https://drive.google.com/file/d/1vK9PlB3Dhnp6-Pvgipe16i5Mij0k40fH/view?usp=sharing

Link of the sample chapter: https://drive.google.com/file/d/1wINmM0CPXhITyv4fDxDD3slvJv1umwXm/view?usp=sharing

Title: Value of Advertisements

Short Description: In today's industry, advertising is actually much less effective than people have historically thought. This work uses the AFINN word rankings to challenge the traditional value of advertising and its effect in building a product's quality image.

Long Description: In today's industry, advertising is actually much less effective than people have historically thought. The research paper "The effect of advertising on brand awareness and perceived quality: An empirical investigation using panel data," by C.R. Clark, Ulrich Doraszelski (University of Pennsylvania), and Michaela Draganska, finds that advertising has "no significant effect on perceived quality" of a brand. Advertising does have a significant positive effect on brand awareness, but the details of an advertisement do not change people's perception of quality. Therefore, looking at the most recent products by Apple, I conducted an experiment: rewriting the advertisements from Apple's official website with different descriptive words (all positive, but differing in level of positivity). Through the process, I wondered about the differences between the randomly generated results and the original copy. In the generated advertisements, will people really pay attention to the new descriptive words? If so, to what extent? If not, I am curious whether computer-generated words could replace traditional advertising copy.

To explore this effect, I first copied down and cleaned a set of advertisements from Apple's official website. Then, I categorized the AFINN word list by level of positivity and part of speech (noun/verb/adj/adv). After that, I replaced descriptive words in the original ads with AFINN-ranked words, keeping the proper capitalization and punctuation so that things still "make sense". An example of the advertisement for the iPhone Xs:

Here is an example of the advertisement for the AirPods:

Moreover, to make the generated advertisements resemble Apple's actual advertisements, I studied the typography, choice of images, layout, and color scheme of their official website. From one of their advertisement videos, I took photos that represent the product and randomly assigned them to each advertisement.

Here are some inspirational layouts and photos used for designing the final pages:

I found that most sentences are overlaid on a picture. Therefore, I started to collect pictures that Apple often uses.
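Before the full embedded code, here is a minimal sketch of the substitution step described above, assuming an AFINN-style word-to-score object and RiTa for part-of-speech tags (the afinn contents and the sample sentence are placeholders):

// Hypothetical sketch: bucket AFINN words by score band and part of speech,
// then swap each adjective in a sentence for a random word from the chosen bucket.
var afinn = { "amazing": 4, "awesome": 4, "nice": 3, "fine": 2, "okay": 1 }; // placeholder scores
 
function buildBucket(minScore, maxScore) {
  var bucket = [];
  for (var word in afinn) {
    if (afinn[word] >= minScore && afinn[word] <= maxScore && RiTa.isAdjective(word)) {
      bucket.push(word);
    }
  }
  return bucket;
}
 
function rewriteAdjectives(sentence, bucket) {
  var words = RiTa.tokenize(sentence);
  var tags = RiTa.getPosTags(sentence);
  for (var i = 0; i < words.length; i++) {
    if (/jj.*/.test(tags[i]) && bucket.length > 0) {
      words[i] = bucket[Math.floor(Math.random() * bucket.length)];
    }
  }
  return RiTa.untokenize(words);
}
 
// e.g. rewriteAdjectives("A big display and a fast chip.", buildBucket(3, 5))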

Embedded Code:

var rhymes, word, data;
 
var afinn;
 
function preload() {
  afinn = loadJSON('afinn111.json');
  script = loadJSON('description.json');
}
 
 
var detect_words = [];
var detect_values = [];
var currentlevel;
//for general category
var pos_verbs = [];
var neg_verbs = [];
var pos_adj = [];
var neg_adj = [];
var pos_adv = [];
var neg_adv = [];
//for high category
var pos_adj_high = [];
var pos_adv_high = [];
var pos_verbs_high = [];
//for low category 
var pos_adj_low = [];
var pos_adv_low = [];
var pos_verbs_low = [];
 
var original_ary1 = [];
var original_ary2 = [];
var original_ad;
var original_ad_ary;
var output = [];
var output1 = [];
var output2 = [];
var currentT1 = [];
var currentT2 = [];
var currentT3 = [];
var currentT4 = [];
var currentT5 = [];
var currentT6 = [];
var currentT7 = [];
var currentT8 = [];
var currentT9 = [];
var phonecontent;
var imagecontent;
// var page = {phone: phonecontent, image: imagecontent}
// var page2 = {phone: phonecontent, image: imagecontent}
// var page3 = {phone: phonecontent, image: imagecontent}
var final_array = [];
 
 
//creating the larger 2d array
var pagearray1 = [];
var pagearray2 = [];
 
 
 
function setup(){
  createCanvas(800, 1000);
  fill(255);
  textFont("Georgia");
 
  var txt = afinn;
  var currentWord;
 
  detect_words = Object.getOwnPropertyNames(txt);
 	detect_values = Object.values(txt);
//AirPods
//IphoneXs
  original_ary1 = script.IphoneXs;
  //print(original_ary1)
  produceP(original_ary1, pagearray1);
  //original_ary2 = script.AirPods;
  //produceP(original_ary2, pagearray2);
  print("array1", pagearray1)
  //print("array2", pagearray2)
 
}
 
function produceP(original_ary, pagearray){
  //print(original_ary)
  //this for loop is used to get the positive and negative verbs for the advertisement
  for (var i = 0; i < detect_values.length; i++){
    currentWord = detect_words[i].replace(/\s+/g, '');
    if (detect_values[i] > 0){
      if (RiTa.isVerb(currentWord)) {
      	pos_verbs.push(currentWord)
    	} 
      if (RiTa.isAdjective(currentWord)){
        pos_adj.push(currentWord)
      }
      if (RiTa.isAdverb(currentWord)){
        pos_adv.push(currentWord)
      }
    }
    if (detect_values[i] <= 5 && detect_values[i] >= 3){
      if (RiTa.isVerb(currentWord)) {
      	pos_verbs_high.push(currentWord)
    	} 
      if (RiTa.isAdjective(currentWord)){
        pos_adj_high.push(currentWord)
      }
      if (RiTa.isAdverb(currentWord)){
        pos_adv_high.push(currentWord)
      }
    }
    if (detect_values[i] < 3 && detect_values[i] >= 0){
      if (RiTa.isVerb(currentWord)) {
      	pos_verbs_low.push(currentWord)
    	} 
      if (RiTa.isAdjective(currentWord)){
        pos_adj_low.push(currentWord)
      }
      if (RiTa.isAdverb(currentWord)){
        pos_adv_low.push(currentWord)
      }
    }
  }
 
  // print("verbs", pos_verbs_5)
  // print("adj", pos_adj_5)
  // print("adv",pos_adv_5)
 
////////////////generating positivity words in general////////////
	for (var t = 0; t < original_ary.length; t++){
    var newstring =new RiString (original_ary[t])
    var posset = newstring.pos()
    var wordset = newstring.words()
    var original_ad_ary = original_ary[t].match(/\w+|\s+|[^\s\w]+/g)
    var current_content = "";
    for (var j = 0; j < posset.length; j++){
      // if the word is a verb
      if (/vb.*/.test(posset[j])){
        // if the word is not is/are/be
        if (wordset[j] != "is" && wordset[j] != "are" && wordset[j] != "be"){
          //running for all positive words
          var newverb = round(random(0, pos_verbs.length-1));
          var track;
          if (j>=2){
            if (wordset[j-1] == "." ){
              track = pos_verbs[str(newverb)][0].toUpperCase()+pos_verbs[str(newverb)].slice(1);
            }else{
              track = pos_verbs[str(newverb)];
            }
          }else{
            if (j==0){
            	track = pos_verbs[str(newverb)][0].toUpperCase()+pos_verbs[str(newverb)].slice(1);
            }else{
            	track = pos_verbs[str(newverb)];
            }
          }
          current_content += track;
        }else if (wordset[j] == "is"){
          current_content += "is";
        }else if (wordset[j] == "are"){
          current_content += "are";
        }else{
          current_content += "be";
        }
      }else{
        if (/jj.*/.test(posset[j])){
          var newadj = round(random(0, pos_adj.length-1));
          var track2;
          if (j&gt;=2){
            if (wordset[j-1] == "." ){
              track2 = pos_adj[str(newadj)][0].toUpperCase()+pos_adj[str(newadj)].slice(1);
            }else{
              track2 = pos_adj[str(newadj)];
            }
          }else{
            if (j==0){
            	track2 = pos_adj[str(newadj)][0].toUpperCase()+pos_adj[str(newadj)].slice(1);
            }else{
            	track2 = pos_adj[str(newadj)];
            }
          }
          current_content += track2;
        } else if (/rb.*/.test(posset[j])){
          var newadv = round(random(0, pos_adv.length-1));
          var track3;
          if (j&gt;=2){
            if (wordset[j-1] == "." ){
              track3 = pos_adv[str(newadv)][0].toUpperCase()+pos_adv[str(newadv)].slice(1);
            }else{
              track3 = pos_adv[str(newadv)];
            }
          }else{
            if (j==0){
            	track3 = pos_adv[str(newadv)][0].toUpperCase()+pos_adv[str(newadv)].slice(1);
            }else{
            	track3 = pos_adv[str(newadv)];
            }
          }
          current_content += track3;
        }else{
          current_content += wordset[j];
        }
      }
      if (wordset[j+1] != "." && wordset[j+1] != "!" && wordset[j+1] != "," && wordset[j+1] != "?"){
      	current_content += " ";
      }
    }
    output.push(current_content);
    if (t<2) {
      currentT1.push(current_content)
    }else if (t>=2 && t<4){
      currentT2.push(current_content)
    }else{
      currentT3.push(current_content)
    }
  }
  append(pagearray, output);
 
////////////////generating high positivity words////////////
for (var t1 = 0; t1 < original_ary.length; t1++){
    lexicon = new RiLexicon();
    var newstring1 =new RiString (original_ary[t1])
    var posset1 = newstring1.pos()
    var wordset1 = newstring1.words()
    var current_content1 = "";
    for (var j1 = 0; j1 < posset1.length; j1++){
      // getting the positive adj,adv,verbs
      //print("1", wordset1[j1])
      // if the word is a verb
      if (/vb.*/.test(posset1[j1])){
        //print("4",wordset1[j1])
        // if the word is not is/are/be
        if (wordset1[j1] != "is" && wordset1[j1] != "are" && wordset1[j1] != "be"){
          //print("**", wordset1[j1])
          //running for all positive words
          var newverb1 = round(random(0, pos_verbs_high.length-1));
          var trackb1;
          if (j1>=2){
            //print("***", wordset1[j1], wordset1[j1-1])
            if (wordset1[j1-1] == "." ){
              //print("****", wordset1[j1])
              trackb1 = pos_verbs_high[str(newverb1)][0].toUpperCase()+pos_verbs_high[str(newverb1)].slice(1);
            }else{
              //print("*!", wordset1[j1])
              trackb1 = pos_verbs_high[str(newverb1)];
            }
          }else{
            if (j1==0){
            	trackb1 = pos_verbs_high[str(newverb1)][0].toUpperCase()+pos_verbs_high[str(newverb1)].slice(1);
            }else{
            	trackb1 = pos_verbs_high[str(newverb1)];
            }
          }
          current_content1 += trackb1;
        }else if (wordset1[j1] == "is"){
          current_content1 += "is";
        }else if (wordset1[j1] == "are"){
          current_content1 += "are";
        }else{
          current_content1 += "be";
        }
      }else{
        if (/jj.*/.test(posset1[j1])){
          //print("2",wordset1[j1])
          var newadj1 = round(random(0, pos_adj_high.length-1));
          var trackb2;
          if (j1&gt;=2){
            //print("2*", wordset1[j1], wordset1[j1-1])
            if (wordset1[j1-1] == "." ){
              //print("2**", wordset1[j1])
              trackb2 = pos_adj_high[str(newadj1)][0].toUpperCase()+pos_adj_high[str(newadj1)].slice(1);
            }else{
              //print("2!", wordset1[j1])
              trackb2 = pos_adj_high[str(newadj1)];
            }
          }else{
            if (j1==0){
              trackb2 = pos_adj_high[str(newadj1)][0].toUpperCase()+pos_adj_high[str(newadj1)].slice(1);
            }else{
             trackb2 = pos_adj_high[str(newadj1)];
            }
          }
          current_content1 += trackb2;
        } else if(/rb.*/.test(posset1[j1])){
          //print("3",wordset1[j1])
          var newadv1 = round(random(0, pos_adv_high.length-1));
          var trackb3;
          if (j1&gt;=2){
            //print("3*", wordset1[j1], wordset1[j1-1])
            if (wordset1[j1-1] == "." ){
              //print("3**", wordset1[j1])
              trackb3 = pos_adv_high[str(newadv1)][0].toUpperCase()+pos_adv_high[str(newadv1)].slice(1);
            }else{
              //print("3!", wordset1[j1])
              trackb3 = pos_adv_high[str(newadv1)];
            }
          }else{
            if (j1==0){
              trackb3 = pos_adv_high[str(newadv1)][0].toUpperCase()+pos_adv_high[str(newadv1)].slice(1);
            }else{
             trackb3 = pos_adv_high[str(newadv1)];
            }
          }
          current_content1 += trackb3;
        }else{
          //print("5",wordset1[j1])
          current_content1 += wordset1[j1];
        } 
      }
      if (wordset1[j1+1] != "." && wordset1[j1+1] != "!" && wordset1[j1+1] != "," && wordset1[j1+1] != "?"){
        current_content1 += " ";
        //print("6",wordset1[j1]);
      }
    }
 
    output1.push(current_content1);
    if (t1<2) {
      currentT4.push(current_content1)
    }else if (t1>=2 && t1<4){
      currentT5.push(current_content1)
    }else{
      currentT6.push(current_content1)
    }
}
 
	append(pagearray, output1);
  //print("arraybig",pagearray);
 
////////////////generating low positivity words////////////
for (var t2 = 0; t2 < original_ary.length; t2++){
    lexicon = new RiLexicon();
    var newstring2 =new RiString (original_ary[t2])
    var posset2 = newstring2.pos()
    var wordset2 = newstring2.words()
    var current_content2 = "";
    for (var j2 = 0; j2 < posset2.length; j2++){
      // getting the positive adj,adv,verbs
      //print("1", wordset2[j2])
      // if the word is a verb
      if (/vb.*/.test(posset2[j2])){
        //print("4",wordset2[j2])
        // if the word is not is/are/be
        if (wordset2[j2] != "is" && wordset2[j2] != "are" && wordset2[j2] != "be"){
          //print("**", wordset2[j2])
          //running for all positive words
          var newverb2 = round(random(0, pos_verbs_low.length-1));
          var trackc1;
          if (j2>=2){
            //print("***", wordset2[j2], wordset2[j2-1])
            if (wordset2[j2-1] == "." ){
              //print("****", wordset2[j2])
              trackc1 = pos_verbs_low[str(newverb2)][0].toUpperCase()+pos_verbs_low[str(newverb2)].slice(1);
            }else{
              //print("*!", wordset2[j2])
              trackc1 = pos_verbs_low[str(newverb2)];
            }
          }else{
            if (j2==0){
            	trackc1 = pos_verbs_low[str(newverb2)][0].toUpperCase()+pos_verbs_low[str(newverb2)].slice(1);
            }else{
            	trackc1 = pos_verbs_low[str(newverb2)];
            }
          }
          current_content2 += trackc1;
        }else if (wordset2[j2] == "is"){
          current_content2 += "is";
        }else if (wordset2[j2] == "are"){
          current_content2 += "are";
        }else{
          current_content2 += "be";
        }
      }else{
        if (/jj.*/.test(posset2[j2])){
          //print("2",wordset2[j2])
          var newadj2 = round(random(0, pos_adj_low.length-1));
          var trackc2;
          if (j2&gt;=2){
            //print("2*", wordset2[j2], wordset2[j2-1])
            if (wordset2[j2-1] == "." ){
              //print("2**", wordset2[j2])
              trackc2 = pos_adj_low[str(newadj2)][0].toUpperCase()+pos_adj_low[str(newadj2)].slice(1);
            }else{
              //print("2!", wordset2[j2])
              trackc2 = pos_adj_low[str(newadj2)];
            }
          }else{
            if (j2==0){
              trackc2 = pos_adj_low[str(newadj2)][0].toUpperCase()+pos_adj_low[str(newadj2)].slice(1);
            }else{
             trackc2 = pos_adj_low[str(newadj2)];
            }
          }
          current_content2 += trackc2;
        } else if(/rb.*/.test(posset2[j2])){
          //print("3",wordset2[j2])
          var newadv2 = round(random(0, pos_adv_low.length-1));
          var trackc3;
          if (j2&gt;=2){
            //print("3*", wordset2[j2], wordset2[j2-1])
            if (wordset2[j2-1] == "." ){
              //print("3**", wordset2[j2])
              trackc3 = pos_adv_low[str(newadv2)][0].toUpperCase()+pos_adv_low[str(newadv2)].slice(1);
            }else{
              //print("3!", wordset2[j2])
              trackc3 = pos_adv_low[str(newadv2)];
            }
          }else{
            if (j2==0){
              trackc3 = pos_adv_low[str(newadv2)][0].toUpperCase()+pos_adv_low[str(newadv2)].slice(1);
            }else{
             trackc3 = pos_adv_low[str(newadv2)];
            }
          }
          current_content2 += trackc3;
        }else{
          //print("5",wordset2[j2])
          current_content2 += wordset2[j2];
        } 
      }
      if (wordset2[j2+1] != "." && wordset2[j2+1] != "!" && wordset2[j2+1] != "," && wordset2[j2+1] != "?"){
        current_content2 += " ";
        //print("6",wordset2[j2]);
      }
    }
 
    output2.push(current_content2);
    if (t2<2) {
      currentT7.push(current_content2)
    }else if (t2>=2 && t2<4){
      currentT8.push(current_content2)
    }else{
      currentT9.push(current_content2)
    }
}
  //append(pagearray, output2);
  for (var h = 1; h < 10; h++){
    if (h<4){
      currentlevel = "Generally Positive"
    }else if (h>=4 && h < 7){
      currentlevel = "Highly Positive"
    }else {
      currentlevel = "Barely Positive"
    }
    var page = {level: currentlevel, phone: window["currentT"+str(h)], image: str("random"+round(random(1,20))+".jpg")}
    //print(page)
    final_array.push(page);
  }
 
  // print("hi")
  // page1.phone = currentT1;
  // page+str(h).image = str("random"+round(random(1,20))+".jpg")
  // // page2.phone = currentT2;
  // // page2.image = str("random"+round(random(1,20))+".jpg")
  // // page3.phone = currentT3;
  // // page3.image = str("random"+round(random(1,20))+".jpg")
  // final_array.push(page+str(h))
  // // final_array.push(page2)
  // // final_array.push(page3)
  // }
 
 
}
 
function draw()
{
  background(100,0,100);
  textAlign(RIGHT);
  textSize(36);
  //text(output, 280, 40);
 
  textAlign(LEFT);
  textSize(10);
  textLeading(14);
  text(output, 30, 20, 500, 1000);
  text(output1, 30, 220, 500, 1000);
  text(output2, 30, 420, 500, 1000);
 
  createButton('SAVE POEMS BUTTON')
    .position(10, 10)
    .mousePressed(function() {
      saveJSON(final_array, 'test.json');
  });
 
}

 

sepho-book

Title: Exploration?

Description: A generated log/diary of a sailor/explorer.

Zip: https://drive.google.com/open?id=1TessDSHxPt30Q2uxnF5K0VLUhlkLHVRu

Example Chapter: https://drive.google.com/open?id=1u6jQYzeuN5EQFFpsqIsgdtsZk0qZzJu6