[Python] 100 Language Processing Knock 2020 [00 ~ 49 Answers]

This article is a continuation of 100 Language Processing Knock 2020 [00 ~ 39 Answers].

This article covers Chapter 5 (problems 40-49). Most of these problems build on the previous one by adding functionality, so only the code for 48 and 49 is posted here. If you would like to see the problems individually, please refer to the link below.

Link

Language processing 100 knock 2020 Chapter 5: Dependency analysis answer

48. Extracting paths from nouns to roots
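The parser below assumes neko.txt.cabocha is CaboCha's lattice output (cabocha -f1): a '*' line opens each chunk and its third field is the dependency target ('2D' means the chunk depends on chunk 2, '-1D' marks the root), each morpheme line is a tab-separated surface form followed by comma-separated MeCab features, and EOS ends a sentence. A fragment looks roughly like this (the scores and head/function indices are illustrative):

* 0 1D 0/1 -0.764522
吾輩	名詞,代名詞,一般,*,*,*,吾輩,ワガハイ,ワガハイ
は	助詞,係助詞,*,*,*,*,は,ハ,ワ
* 1 -1D 0/0 0.000000
猫	名詞,一般,*,*,*,*,猫,ネコ,ネコ
EOS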

# One morpheme: surface form, base form, part of speech (pos) and POS subdivision (pos1).
class Morph:
    def __init__(self, surface, base, pos, pos1):
        self.surface = surface
        self.base = base
        self.pos = pos
        self.pos1 = pos1
    
    def show(self):
        print(self.surface, self.base, self.pos, self.pos1)
        
# One chunk (bunsetsu): its morphemes, dependency target (dst) and dependency sources (srcs).
class Chunk:
    def __init__(self, sentence_id, chunk_id, dst, srcs):
        self.sentence_id = sentence_id
        self.chunk_id = chunk_id
        self.morphs = []
        self.dst = dst
        self.srcs = srcs
        self.has_noun = False
        self.has_verb = False
        self.has_particle = False
        self.surfaces = ''
        self.first_verb = None
        self.particle = []
        self.sahen_wo = False

    def show_morphs(self):
        morphs = ''
        for morph in self.morphs:
            morphs += morph.surface
        print("morphs:", morphs)

    def show_chunk_id(self):
        print("==========")
        print("chunk_id:", self.chunk_id)

    def show_sentence_id(self):
        if self.chunk_id == 0:
            print("====================")
            print("sentence_id:", self.sentence_id)

    def show_dst(self):
        print("dst:", self.dst)

    def show_srcs(self):
        print("srcs:", self.srcs[self.chunk_id])
        
# Parse the CaboCha lattice file into Chunk and Morph objects.
path = 'neko.txt.cabocha'
with open(path) as f:
    text = f.read().split('\n')
result = []
morphs = []
chunks = []
srcs = [[]]
chunk = None
sentence_id = 0
chunk_id = 0

for line in text[:-1]:
    if line == 'EOS':
        # End of a sentence: stash its morphemes and reset the per-sentence state.
        result.append(morphs)
        morphs = []
        sentence_id += 1
        chunk_id = 0
        srcs = [[]]

    elif line[0] == '*':
        # A '*' line opens a new chunk; its third field is the dependency
        # target, e.g. '2D' -> 2 ('-1D' marks the root).
        if chunk:
            chunks.append(chunk)
        dst = int(line.split()[2][:-1])
        # Grow srcs so that index dst exists, then record this chunk as a source.
        diff = dst + 1 - len(srcs)
        ex = [[] for _ in range(diff)]
        srcs.extend(ex)
        if dst != -1:
            srcs[dst].append(chunk_id)
        chunk = Chunk(sentence_id, chunk_id, dst, srcs)
        chunk_id += 1
      
    else:
        # A morpheme line: surface form, then comma-separated MeCab features
        # (index 0 = POS, 1 = POS subdivision, 6 = base form).
        ls = line.split('\t')
        tmp = ls[1].split(',')
        morph = Morph(ls[0], tmp[6], tmp[0], tmp[1])
        morphs.append(morph)
        chunk.morphs.append(morph)
        
else:
    # The loop's else clause runs after the last line: append the final chunk.
    chunks.append(chunk)

# One slot per chunk over-allocates (there are fewer sentences than chunks),
# but the surplus empty lists are harmless.
sentences = [[] for _ in range(len(chunks))]
for chunk in chunks:
    morphs = ''
    for morph in chunk.morphs:
        morphs += morph.surface
        if morph.pos == '名詞':  # MeCab emits Japanese POS tags; '名詞' = noun
            chunk.has_noun = True
    sentences[chunk.sentence_id].append([morphs,chunk.dst,chunk.has_noun])

def rec(sentence, d, ans):
    # Follow the dst links until the root (dst == -1),
    # joining chunk surfaces with ' -> '.
    if d == -1:
        return ans
    else:
        return rec(sentence, sentence[d][1], ans + ' -> ' + sentence[d][0])

with open('48.txt', mode='w') as f:
    for sentence in sentences:
        for s, d, has_noun in sentence:
            if has_noun:
                ans = rec(sentence, d, s)
                # Strip whitespace and Japanese punctuation from the path.
                ans = ans.replace(" ", "").replace("。", "").replace("、", "")
                print(ans)
                f.write(ans + '\n')
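As a quick sanity check, rec simply follows the dst links until it reaches the root. Here is a hand-built toy sentence in the same [surface, dst, has_noun] layout (illustrative values, not actual CaboCha output):

# Toy sentence: chunk 0 '吾輩は' depends on chunk 1 '猫である', which is the root (dst == -1).
toy = [['吾輩は', 1, True], ['猫である', -1, True]]
print(rec(toy, toy[0][1], toy[0][0]))  # prints: 吾輩は -> 猫である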

49. Extracting dependency paths between nouns

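The parsing code carries over from 48; what changes is the path logic. For every pair of noun-containing chunks i < j in a sentence, the nouns in chunk i are replaced with X and those in chunk j with Y. If chunk j lies on i's path to the root, the path is printed as "X -> ... -> Y"; otherwise the partial paths from i and from j up to (but not including) their lowest common ancestor k are joined as "X -> ... | Y -> ... | k". Below is a minimal sketch of this logic, reusing the chunks list built by the parser above; the helpers mask, surface, and path_to_root are illustrative names of my own.

import itertools
from collections import defaultdict

# Group the chunks built above by sentence; list index == chunk_id.
sents = defaultdict(list)
for c in chunks:
    sents[c.sentence_id].append(c)

def surface(chunk):
    return ''.join(m.surface for m in chunk.morphs)

def mask(chunk, label):
    # Replace the noun morphemes of a chunk with X or Y (one label per chunk).
    out, masked = '', False
    for m in chunk.morphs:
        if m.pos == '名詞':
            if not masked:
                out += label
                masked = True
        else:
            out += m.surface
    return out

def path_to_root(i, sent):
    # Chunk indices on the path from chunk i to the root, i included.
    path = [i]
    while sent[i].dst != -1:
        i = sent[i].dst
        path.append(i)
    return path

with open('49.txt', mode='w') as f:
    for sent in sents.values():
        nouns = [i for i, c in enumerate(sent)
                 if any(m.pos == '名詞' for m in c.morphs)]
        for i, j in itertools.combinations(nouns, 2):
            pi, pj = path_to_root(i, sent), path_to_root(j, sent)
            if j in pi:
                # Chunk j lies on i's path to the root: X -> ... -> Y
                between = [surface(sent[k]) for k in pi[1:pi.index(j)]]
                line = ' -> '.join([mask(sent[i], 'X')] + between + [mask(sent[j], 'Y')])
            else:
                # Otherwise join both paths at their lowest common ancestor k.
                k = next(k for k in pi if k in pj)
                left = [mask(sent[i], 'X')] + [surface(sent[n]) for n in pi[1:pi.index(k)]]
                right = [mask(sent[j], 'Y')] + [surface(sent[n]) for n in pj[1:pj.index(k)]]
                line = ' | '.join([' -> '.join(left), ' -> '.join(right), surface(sent[k])])
            print(line)
            f.write(line + '\n')

One simplification in this sketch: mask collapses all noun morphemes of a chunk into a single X or Y, which is one common reading of the problem statement; treating compound nouns differently would only require changing mask.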
