R/fledgingr.R
tokenize.Rd
Tokenize and Analyze Japanese Texts

Usage:

tokenize(x, mode = c("C", "B", "A"))

Arguments:

x: Text to tokenize.

mode: Unit to split the text into. One of:
  "A": short units
  "B": middle units
  "C": long units (listed first, so the default under the usual match.arg() convention)