Almost two decades after taking two years of college Japanese classes, I have spent the last two years using a tool that is helping me overcome a major obstacle on the path toward language proficiency. As of this second anniversary, WaniKani has helped me recognize and read more than 2,657 words composed of kanji.
It has taken me about two years to reach level 45 of the 60 levels WaniKani offers. I did not expect to experience burnout, but I did at levels 20 through 23: those four levels alone took more than half a year, or about 25% of my time with WaniKani. If I maintain an average of 17 days per level going forward, the remaining 15 levels (15 × 17 ≈ 255 days) will expose me to every review item in a little over eight months. It is possible to rush through the last 15 levels in four months, but my experience with burnout encourages me to enjoy the process and continue at a moderate pace.
I am pleased with my accuracy and with the balance of items across the various SRS stages. A review item needs six months to move from “enlightened” to “burned”: after going six months without a review, an enlightened item becomes burned only if both its meaning and its reading are recalled correctly. An item that is not remembered correctly is instead demoted to a lower SRS stage. In theory, the 3,773 items I have burned have become part of my long-term memory. Remembering them is reinforced by studying higher-level textbooks and consuming native material. For example, music videos for anime songs display lyrics containing kanji, and reading along reinforces my ability to recognize and read those kanji.
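The stage mechanics can be pictured with a small sketch (the stage names are WaniKani's, but the intervals are omitted and the single-step demotion rule is a simplifying assumption, not WaniKani's exact algorithm):

```python
# Simplified SRS stage model. The stage names match WaniKani's, but
# the one-step demotion rule is an illustrative assumption.
STAGES = ["apprentice", "guru", "master", "enlightened", "burned"]

def review(stage: str, remembered: bool) -> str:
    """Promote an item one stage on a correct answer, demote it otherwise."""
    i = STAGES.index(stage)
    if remembered:
        return STAGES[min(i + 1, len(STAGES) - 1)]
    return STAGES[max(i - 1, 0)]
```

For example, `review("enlightened", True)` returns `"burned"`, while `review("enlightened", False)` drops the item back to `"master"`.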
WaniKani has made studying intermediate textbooks and consuming native material more enjoyable for me. I spend less time looking up common words in a dictionary, which lets me focus on the actual content of the media. I have no regrets about signing up for a lifetime WaniKani membership, and I recommend it to anyone interested in learning Japanese.
「人間として正しいことは何なのか」ということを
基準に判断を行わなくてはならない。
We must make decisions based on what is right as human beings.
Parts List:
- AMD Ryzen 5 3600X w/Wraith Spire Cooler [$189.99]
- MSI MPG X570 Gaming Plus ATX [$154.99]
- MSI Radeon RX 5700 XT Gaming X 8GB 256-bit GDDR6 PCI Express 4.0 [$409.99]
- G.SKILL Ripjaws V DDR4-3200 CL16-18-18-38 1.35V 128GB (4x32GB) 288-pin [$429.98]
- Samsung 970 EVO Plus NVMe M.2 SSD 2000GB [$249.99]
- Samsung 970 EVO Plus NVMe M.2 SSD 500GB [$104.99]
- NZXT H510i ATX Case [$99.99]
- Corsair RM850 80 Plus Gold Fully Modular ATX PSU [$159.99]
- Microsoft Windows 10 Pro (Retail) [$199.99]
128 GB of RAM is overkill: a G.SKILL 32GB (4x8GB) F4-3200C16Q-32GVK kit would have saved $190.98, and skipping the 500GB SSD would have saved another $104.99.
At idle: CPU temperature 38 ℃, system temperature 31 ℃, CPU fan at 1,325 RPM, case fans at 950 RPM. Case noise (as reported by NZXT CAM): 48 dB.
I am planning to implement a bare-bones system to back up pictures, videos, financial records, and software development projects. Among Google, Microsoft, and Amazon, I find the long-term storage services offered by Amazon to be cost-effective at $0.0036 per GB per month, or $3.60/mo for 1 TB. Amazon S3 Glacier also provides a mechanism for fetching an inventory of the files uploaded to the service, and this inventory includes a tree hash, or checksum, for each “archive” that is uploaded. (At $0.00099 per GB per month, or $0.99/mo for 1 TB, an even more cost-effective alternative is the S3 Glacier Deep Archive storage class for data stored in Amazon S3 buckets. Amazon S3 Glacier differs from Amazon S3 with the S3 Glacier Deep Archive storage class in that the former deals with vaults and archives, whereas the latter deals with buckets and storage classes. Unfortunately, the Amazon S3 service does not provide a reliable mechanism for retrieving checksum data.)
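Retrieving that inventory is an asynchronous job. Below is a minimal sketch of what fetching it might look like with boto3; the vault name is a placeholder, and the single completion check stands in for the polling (or SNS notification) a real workflow would use:

```python
# Sketch of retrieving an S3 Glacier vault inventory with boto3.
# "my-backup-vault" is a hypothetical vault name.
import boto3

glacier = boto3.client("glacier")
VAULT = "my-backup-vault"

# Start an inventory-retrieval job ("-" means the current account).
job = glacier.initiate_job(
    accountId="-",
    vaultName=VAULT,
    jobParameters={"Type": "inventory-retrieval"},
)
job_id = job["jobId"]

# Inventory jobs typically take hours; once complete, the output is a
# JSON document listing each archive with its SHA256TreeHash.
status = glacier.describe_job(accountId="-", vaultName=VAULT, jobId=job_id)
if status["Completed"]:
    output = glacier.get_job_output(accountId="-", vaultName=VAULT, jobId=job_id)
    inventory_json = output["body"].read()
```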
Amazon claims its S3 services achieve 99.999999999% (“eleven nines”) durability. I am uncertain that I can achieve the same level of durability independently. As part of my backup system, I need to periodically check for differences between my local files and my backups. To confirm that the local copies of archives uploaded to Amazon Web Services are identical to the originals, I implemented a standalone Python script that generates the Amazon S3 Glacier tree hash checksum: treehash.py.
The script can be run from the command line as follows:
python3 treehash.py inputfile.bin
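For reference, here is a minimal sketch of the tree hash computation, following the algorithm AWS documents for S3 Glacier (SHA-256 digests of 1 MiB chunks, concatenated and re-hashed pairwise up to a single root); the actual treehash.py may differ in structure and error handling:

```python
#!/usr/bin/env python3
# Sketch of the Amazon S3 Glacier tree hash: SHA-256 over 1 MiB
# chunks, then adjacent digests are concatenated and re-hashed until
# a single root digest remains.
import hashlib
import sys

CHUNK_SIZE = 1024 * 1024  # tree hashes are built over 1 MiB chunks

def tree_hash(path: str) -> str:
    # Leaf level: one SHA-256 digest per 1 MiB chunk of the file.
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            hashes.append(hashlib.sha256(chunk).digest())
    if not hashes:  # empty file: digest of zero bytes
        hashes = [hashlib.sha256(b"").digest()]

    # Combine adjacent pairs level by level; a leftover digest at the
    # end of an odd-length level is promoted unchanged.
    while len(hashes) > 1:
        paired = [
            hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
            for i in range(0, len(hashes) - 1, 2)
        ]
        if len(hashes) % 2 == 1:
            paired.append(hashes[-1])
        hashes = paired
    return hashes[0].hex()

if __name__ == "__main__":
    print(tree_hash(sys.argv[1]))
```

The hex digest printed for a local file can then be compared against the SHA256TreeHash reported for the corresponding archive in the vault inventory.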