diff --git a/README.md b/README.md
index b4595b7a4a59be65e6a176c5fc607e2928f77833..929316de851b987a47d6691a1e263bcc6f8bc656 100644
--- a/README.md
+++ b/README.md
@@ -1,32 +1,36 @@
 # German sentiment/affect analysis of historical satire-periodical "Die Bombe"
 
 ## Description
-This projects focuses on performing an affect analysis of the historical satiric-periodical "Die Bombe", which was published in vienna between 1871 and 1925. The provided scripts concern:
 
-    1. Data Cleaning of the OCR (provided by ANNO - ÖNB)
-    2. Affect Analysis: Classification of emotions in the text using the lexicon-based approach
-    3. Creating a Visualization (line chart) of the results
+This project focuses on performing an affect analysis of the historical satirical periodical "Die Bombe", which was published in Vienna between 1871 and 1925. The provided scripts cover:
+
+1. Data Cleaning of the OCR (provided by ANNO - ÖNB)
+2. Affect Analysis: Classification of emotions in the text using the lexicon-based approach
+3. Creating a Visualization (line chart) of the results
     
-This project will focus on the periodical "Die Bombe", however the scripts as well as the dictionaries and emotion-lexicons shall provide a starting point for the affect analysis of other digitalized newspapers and different texts in german. 
+This project focuses on the periodical "Die Bombe"; however, the scripts as well as the dictionaries and emotion lexicons are meant to provide a starting point for the affect analysis of other digitized newspapers and other German texts. 
 
-Affect Analysis is a Subcategory of Sentiment Analysis, which is part of NLP (Natural Langauge Processing). While NLP concerns the way computerprograms are able to process human language, sentiment analysis examines the sentiment (positive, negative, neutral) which is contained in a text. Affect analysis is even more detailed than sentiment analysis - it focuses on determining concrete emotions like anger or sadness in texts. To achieve this goal, I used the lexicon-based approach of affect analysis, which uses dictionaries for different emotions containing words that are associated with them. 
+Affect analysis is a subcategory of sentiment analysis, which is part of NLP (Natural Language Processing). While NLP concerns the way computer programs process human language, sentiment analysis examines the sentiment (positive, negative, neutral) contained in a text. Affect analysis is more fine-grained than sentiment analysis: it focuses on identifying concrete emotions such as anger or sadness in texts. To achieve this, I used the lexicon-based approach to affect analysis, which relies on dictionaries that list the words associated with each emotion. 
 
 ## Procedure
-All notebooks which were created and used for the affect analysis are available on this Gitlab. 
-Certain intermediate steps are not in the code anymore, as they are not relevant for the final analysis. These steps contained: using panda dataframes to receive the most frequently occuring words that were either classified as spelled wrong by the dictionary or did not occur in the emotion-lists. If the word was simply written with the spelling rules of that time and place, I put them in the dictionary. Same goes for words that were clearly associated with an emotion, but were not included in a list.
 
-The data was cleaned using the python library [nltk](https://www.nltk.org/). For the spellchecking I used the spellchecker [spylls](https://github.com/zverok/spylls) which is based on the hunspell dictionary and ported to python. The dictionaries being used for the spellchecking were attained from [LibreOffice/dictionaries](https://github.com/LibreOffice/dictionaries) and were then modified to better adjust to the spelling rules or variations of austria from the late 19./early 20. c.. The spelling variations were added because the german-speaking area did not yet have standardized spelling rules, which led to certain variations which were removed over time, especially because of the orthographic conference of 1901. For the POS-tagging the [Tiger-Corpus](https://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger/) of the University of Stuttgart has been used. For researchers who want to use the Corpus a license has to be affirmed on the website for further usage. (Version used in this project: tiger_release_aug07.corrected.16012013.conll09)  For the lemmatization [GermaLemma](https://github.com/WZBSocialScienceCenter/germalemma) was used.
+All notebooks which were created and used for the affect analysis are available in this GitLab repository. 
+Certain intermediate steps are no longer in the code, as they are not relevant for the final analysis. These steps included using pandas DataFrames to retrieve the most frequently occurring words that were either flagged as misspelled by the spellchecking dictionary or did not occur in the emotion lists. If a word was simply written according to the spelling rules of that time and place, I added it to the dictionary; the same goes for words that were clearly associated with an emotion but not yet included in a list.
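+
+For illustration, a minimal sketch of how such a frequency check can look (self-contained toy data; the word lists and variable names are made up for this example, not taken from the project):
+
+```python
+from collections import Counter
+
+import pandas as pd
+
+# toy input: tokens from the cleaned OCR text of one issue (hypothetical)
+tokens = ["thür", "freude", "zorn", "thür", "blatt", "thür", "freude"]
+emotion_words = {"freude", "zorn"}  # words already covered by the emotion lists
+
+# rank the words that are not covered yet, so they can be reviewed manually
+uncovered = [t for t in tokens if t not in emotion_words]
+freq = pd.DataFrame(Counter(uncovered).most_common(), columns=["word", "count"])
+print(freq)
+```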
+
+The data was cleaned using the Python library [nltk](https://www.nltk.org/). For the spellchecking I used [spylls](https://github.com/zverok/spylls), a port of the Hunspell spellchecker to Python. The dictionaries used for the spellchecking were obtained from [LibreOffice/dictionaries](https://github.com/LibreOffice/dictionaries) and then modified to better reflect the Austrian spelling rules and variations of the late 19th and early 20th century. These spelling variations were added because the German-speaking area did not yet have standardized spelling rules, which led to variations that were removed over time, especially as a result of the orthographic conference of 1901. For the POS tagging, the [Tiger Corpus](https://www.ims.uni-stuttgart.de/forschung/ressourcen/korpora/tiger/) of the University of Stuttgart was used (version: `tiger_release_aug07.corrected.16012013.conll09`); researchers who want to use the corpus have to accept a license on the website. For the lemmatization, [GermaLemma](https://github.com/WZBSocialScienceCenter/germalemma) was used.
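+
+As a rough sketch of this cleaning pipeline (the file paths are assumptions and have to be adjusted; nltk additionally needs its "punkt" tokenizer data):
+
+```python
+import nltk
+from germalemma import GermaLemma
+from spylls.hunspell import Dictionary
+
+nltk.download("punkt", quiet=True)  # tokenizer models used by word_tokenize
+
+# assumed paths: the modified LibreOffice .aff/.dic pair and the Tiger corpus file
+speller = Dictionary.from_files("dictionaries/de_AT")
+lemmatizer = GermaLemma(tiger_corpus="tiger_release_aug07.corrected.16012013.conll09")
+
+text = "Die Thür stand offen."
+for token in nltk.word_tokenize(text, language="german"):
+    if token.isalpha() and not speller.lookup(token):
+        # historical spellings like "Thür" show up here and can be added to the dictionary
+        print(token, "->", list(speller.suggest(token))[:3])
+
+print(lemmatizer.find_lemma("Häuser", "N"))  # lemmatization by coarse POS class -> "Haus"
+```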
 
-For the affect analysis itself I used the dictionaries of Dr. Roman Klinger from his project from 2016 and the EmoLex of Dr. Saif Mohammad created via crowdsourcing in 2013. I then analyzed the cleaned data with the combination of these dictionaries by comparing the amount of words that could be sorted to certain emotions. The dictionaries of Dr. Klinger et al. are contained in this Gitlab as well as the revised dictionaries containing words which were added from "Die Bombe" after the first run-though of the analysis. the dictionary of Dr. Mohammad is not in this GitLab but can be optained for free for research- and educational uses on his website: [NRC-Emotion-Lexicon](https://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm). Dr. Klingers dictionaries for lexicon-based affect analysis can be viewed [here](https://bitbucket.org/rklinger/german-emotion-dictionary/src/master/). Even though for my analysis the dictionaries have been combined, I removed the words which could only be found in Dr. Mohammads dictionary. For researchers who also want to make use of it, they can obtain it from his website for the prior named uses.
+For the affect analysis itself I used the dictionaries from Dr. Roman Klinger's 2016 project and the EmoLex of Dr. Saif Mohammad, created via crowdsourcing in 2013. I then analyzed the cleaned data with the combination of these dictionaries by comparing the number of words that could be assigned to certain emotions. The dictionaries of Dr. Klinger et al. are contained in this GitLab repository, as are the revised dictionaries containing words added from "Die Bombe" after the first run of the analysis. The dictionary of Dr. Mohammad is not in this repository but can be obtained for free for research and educational uses on his website: [NRC-Emotion-Lexicon](https://saifmohammad.com/WebPages/NRC-Emotion-Lexicon.htm). Dr. Klinger's dictionaries for lexicon-based affect analysis can be viewed [here](https://bitbucket.org/rklinger/german-emotion-dictionary/src/master/). Even though the dictionaries were combined for my analysis, I removed the words which could only be found in Dr. Mohammad's dictionary; researchers who want to make use of it can obtain it from his website for the uses named above.
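+
+The counting step itself can be summarized by a small, self-contained sketch (the tiny word lists below are placeholders, not the actual Klinger/EmoLex entries):
+
+```python
+from collections import Counter
+
+# placeholder stand-ins for the combined emotion lexicons
+emotion_lexicon = {
+    "Freude": {"freude", "lachen", "glück"},
+    "Wut": {"zorn", "wut", "ärger"},
+}
+
+lemmas = ["freude", "zorn", "blatt", "lachen", "freude"]  # lemmatized tokens of one issue
+
+counts = Counter()
+for emotion, words in emotion_lexicon.items():
+    counts[emotion] = sum(1 for lemma in lemmas if lemma in words)
+
+total = sum(counts.values()) or 1
+shares = {emotion: n / total for emotion, n in counts.items()}
+print(counts, shares)
+```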
 
-The visualization was created using the python library [plotly](https://bitbucket.org/rklinger/german-emotion-dictionary/src/master/).
+The visualization was created using the Python library [plotly](https://plotly.com/python/).
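+
+A minimal example of such a line chart with plotly (the aggregated values below are invented purely for illustration):
+
+```python
+import pandas as pd
+import plotly.express as px
+
+# invented example data: one row per year and emotion
+df = pd.DataFrame({
+    "year": [1871, 1871, 1872, 1872],
+    "emotion": ["Freude", "Wut", "Freude", "Wut"],
+    "share": [0.42, 0.18, 0.39, 0.22],
+})
+
+fig = px.line(df, x="year", y="share", color="emotion",
+              title="Emotion shares in 'Die Bombe' per year")
+fig.show()
+```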
 
-(Disclaimer: the paths in the scripts reflect directories in jupyter notebook, so they will need to be adjusted prior to usage. E. g. path = '../daten_bombe/ANNO_bom1871*' where the data was stored in a directory outside of the one I frequently used in daten_bombe containing further directories for every day the periodical was published, containing xml-files for every page of the issue)
+(Disclaimer: the paths in the scripts reflect the directory layout of my Jupyter Notebook environment, so they will need to be adjusted prior to usage. For example, `path = '../daten_bombe/ANNO_bom1871*'` points to the directory `daten_bombe` outside my working directory, which contains a subdirectory for every day the periodical was published, each holding the XML files for every page of the issue.)
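+
+For example, the issue directories can be collected with a glob pattern like this (adjust the pattern to your own layout):
+
+```python
+import glob
+
+path = '../daten_bombe/ANNO_bom1871*'  # one matched directory per publication day
+
+for issue_dir in sorted(glob.glob(path)):
+    print(issue_dir)  # each directory contains the XML files for the pages of that issue
+```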
 
 ## Datasets
-The dataset used for the analysis consisted of 2,593 issues with 21,349 pages made available on ANNO (Austrian Newspapers Online) by the austrian national library.
+
+The dataset used for the analysis consisted of 2,593 issues with 21,349 pages made available on ANNO (AustriaN Newspapers Online) by the Austrian National Library.
 
 ## Requirements
+
 For necessary installations and documents/corpora see the `requirements.txt` file.
 
 To install the packages automatically, run the following command:
@@ -35,12 +39,13 @@ pip install -r requirements.txt
 ```
 
 ## Disclaimer and further research
-The provided dictionaries, emotion-lists and results are not yet finished, but provide a starting point for a more thourough investigation of the emotions contained in "Die Bombe". Interesting further research points concerning the lexicon-based approach could entail a heavier focus on the intensities of the emotions. (For example, see Dr. Mohammads [NRC Emotion Intensity Lexicon](https://saifmohammad.com/WebPages/AffectIntensity.htm) regarding emotion intensity)
+
+The provided dictionaries, emotion lists and results are not yet finished, but they provide a starting point for a more thorough investigation of the emotions contained in "Die Bombe". Interesting further research concerning the lexicon-based approach could entail a heavier focus on the intensities of the emotions. (For example, see Dr. Mohammad's [NRC Emotion Intensity Lexicon](https://saifmohammad.com/WebPages/AffectIntensity.htm) regarding emotion intensity.)
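+
+One possible direction, sketched with invented intensity scores (not taken from the NRC lexicon), would be to weight the counted words by an intensity value instead of counting every match equally:
+
+```python
+# invented intensity scores in [0, 1] -- a real analysis would read them from an intensity lexicon
+intensity = {"zorn": 0.83, "wut": 0.78, "freude": 0.65}
+
+lemmas = ["zorn", "freude", "freude"]  # lemmatized tokens of one issue
+
+weighted_score = sum(intensity.get(lemma, 0.0) for lemma in lemmas)
+print(weighted_score)
+```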
 
 ## Authors and acknowledgment
+
 Author: Lisa Gollner
 
-I give thanks to the members of the Labs-Team of the austrian national library Christoph Steindl, Simon Mayer und Johannes Knüchel for not only providing me with the necessary data but also for their continuing support during my internship. As well as the Digital Library Department as a whole for giving me such a warm welcome!
+I give thanks to the members of the ONB Labs team of the Austrian National Library, Christoph Steindl, Simon Mayer and Johannes Knüchel, for not only providing me with the necessary data but also for their continuing support during my internship, as well as to the Digital Library Department as a whole for giving me such a warm welcome!
 
 I also give thanks to Dr. Klinger, Dr. Mohammad and all the other researchers who provided access to their research and results, further enhancing my own project.
-