Notes on Building a Translation Tool: With This, Is Real-Time Translation Still Far Away? (2)

Main elements of the GUI:

```python
root = tk.Tk()
root.title("netease youdao translation test")
frm = tk.Frame(root)
frm.grid(padx='80', pady='80')
# label1 = tk.Label(frm, text="选择待翻译文件:")  # "select file to translate"
# label1.grid(row=0, column=0)
label = tk.Label(frm, text='选择语言类型:')  # "select language type"
label.grid(row=0, column=0)
combox = ttk.Combobox(frm, textvariable=tk.StringVar(), width=38)
combox["value"] = lang_type_dict
combox.current(0)
combox.bind("<<ComboboxSelected>>", get_lang_type)
combox.grid(row=0, column=1)
btn_start_rec = tk.Button(frm, text='开始录音', command=start_rec)  # "start recording"
btn_start_rec.grid(row=2, column=0)
lb_Status = tk.Label(frm, text='Ready', anchor='w', fg='green')
lb_Status.grid(row=2, column=1)
btn_sure = tk.Button(frm, text="结束并识别", command=get_result)  # "stop and recognise"
btn_sure.grid(row=3, column=0)
root.mainloop()
```

2. Audio recording: the pyaudio library (install it via pip) drives the audio device to record the WAV file the API expects, and the wave library saves it to disk:

```python
import threading
import wave

import pyaudio


class Recorder:  # class name is not shown in the original post; "Recorder" is assumed
    def __init__(self, audio_path, language_type, is_recording):
        self.audio_path = audio_path        # original had a trailing comma here, making this a 1-tuple
        self.audio_file_name = ''
        self.language_type = language_type  # same trailing-comma bug fixed here
        # language_dict (display name -> langType code) is defined elsewhere in the script
        self.language = language_dict[language_type]
        print(language_dict[language_type])
        self.is_recording = is_recording
        self.audio_chunk_size = 1600
        self.audio_channels = 1
        self.audio_format = pyaudio.paInt16
        self.audio_rate = 16000

    def record_and_save(self):
        self.is_recording = True
        # self.audio_file_name = self.audio_path + '/recordtmp.wav'
        self.audio_file_name = '/recordtmp.wav'  # note: writes to the filesystem root; the commented line above is likely what was intended
        threading.Thread(target=self.record, args=(self.audio_file_name,)).start()

    def record(self, file_name):
        print(file_name)
        p = pyaudio.PyAudio()
        stream = p.open(
            format=self.audio_format,
            channels=self.audio_channels,
            rate=self.audio_rate,
            input=True,
            frames_per_buffer=self.audio_chunk_size
        )
        wf = wave.open(file_name, 'wb')
        wf.setnchannels(self.audio_channels)
        wf.setsampwidth(p.get_sample_size(self.audio_format))
        wf.setframerate(self.audio_rate)
        # read from the stream and append to the file until the flag is cleared
        while self.is_recording:
            data = stream.read(self.audio_chunk_size)
            wf.writeframes(data)
        wf.close()
        stream.stop_stream()
        stream.close()
        p.terminate()
```
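One detail worth noting: with the parameters above, each `stream.read()` call returns 100 ms of audio (1600 frames at 16 000 Hz), so the `is_recording` stop flag is checked ten times a second, and each chunk written to the WAV file is 3 200 bytes (16-bit mono). A quick sanity check of that arithmetic:

```python
# Derived from the recorder's parameters: 16 kHz, 16-bit (2 bytes), mono.
audio_rate = 16000         # samples per second
audio_chunk_size = 1600    # frames read per loop iteration
sample_width = 2           # bytes per sample for pyaudio.paInt16
channels = 1

chunk_seconds = audio_chunk_size / audio_rate             # audio duration per chunk
chunk_bytes = audio_chunk_size * sample_width * channels  # bytes written per chunk
print(chunk_seconds, chunk_bytes)  # → 0.1 3200
```

Smaller chunks would make the stop button feel more responsive at the cost of more frequent reads.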

3. Calling the recognition/translation API:
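A note on authentication before the full client code: with the `signType=v4` scheme used in this request URI, the `sign` parameter is the SHA-256 hex digest of `app_key + salt + curtime + app_secret`. A self-contained sketch with dummy credentials (the real values come from the Youdao Zhiyun console and are assumptions here):

```python
import hashlib


def encrypt(sign_str):
    # SHA-256 hex digest -- the signType=v4 signature placed in the request URI
    return hashlib.sha256(sign_str.encode('utf-8')).hexdigest()


# dummy credentials for illustration only; never hard-code real secrets
app_key = 'demo-app-key'
app_secret = 'demo-app-secret'
salt = 'demo-salt'        # in the real client: str(uuid.uuid1())
curtime = '1600000000'    # in the real client: str(int(time.time()))

sign = encrypt(app_key + salt + curtime + app_secret)
print(sign)  # 64 lowercase hex characters
```

Because `curtime` is part of the signed string, a stale timestamp invalidates the signature, so the server can reject replayed requests.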

```python
import hashlib
import json
import time
import uuid

import websocket


def recognise(filepath, language_type):
    global file_path
    file_path = filepath
    nonce = str(uuid.uuid1())
    curtime = str(int(time.time()))
    signStr = app_key + nonce + curtime + app_secret
    print(signStr)
    sign = encrypt(signStr)
    uri = ("wss://openapi.youdao.com/stream_asropenapi?appKey=" + app_key +
           "&salt=" + nonce + "&curtime=" + curtime + "&sign=" + sign +
           "&version=v1&channel=1&format=wav&signType=v4&rate=16000&langType=" + language_type)
    print(uri)
    start(uri, 1600)


def encrypt(signStr):
    hash = hashlib.sha256()
    hash.update(signStr.encode('utf-8'))
    return hash.hexdigest()


def on_message(ws, message):
    result = json.loads(message)
    try:
        resultmessage1 = result['result'][0]
        resultmessage2 = resultmessage1["st"]['sentence']
        print(resultmessage2)
    except Exception:
        # intermediate messages without a recognised sentence are ignored
        print('')


def on_error(ws, error):
    print(error)


def on_close(ws, *args):  # newer websocket-client versions also pass close_status_code and close_msg
    print("### closed ###")


def on_open(ws):
    count = 0
    file_object = open(file_path, 'rb')
    while True:
        # 1600 bytes = 800 samples = 50 ms of 16 kHz 16-bit mono audio;
        # the sleep paces the upload at roughly real time
        chunk_data = file_object.read(1600)
        ws.send(chunk_data, websocket.ABNF.OPCODE_BINARY)
        time.sleep(0.05)
        count = count + 1
        if not chunk_data:
            break
    print(count)
    ws.send('{"end": "true"}', websocket.ABNF.OPCODE_BINARY)


def start(uri, step):
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp(uri,
                                on_message=on_message,
                                on_error=on_error,
                                on_close=on_close)
    ws.on_open = on_open
    ws.run_forever()
```

Summary

Youdao Zhiyun's APIs are as easy to use as ever. Most of the effort in this project was actually spent on recognition failures caused by the poor quality of my own recordings; once the audio quality was fixed, the recognition results were accurate. The next step is to feed those results into translation — with the Youdao Zhiyun API, real-time translation really can be this simple!
