Iteration 1, loss = 1.63217924
Iteration 2, loss = 1.29897925
Iteration 3, loss = 1.27730382
Iteration 4, loss = 1.25522668
Iteration 5, loss = 1.24093841
Iteration 6, loss = 1.22688958
Iteration 7, loss = 1.21007260
Iteration 8, loss = 1.19890125
Iteration 9, loss = 1.19354226
Iteration 10, loss = 1.17991083
Iteration 11, loss = 1.17395744
Iteration 12, loss = 1.16664669
Iteration 13, loss = 1.16354717
Iteration 14, loss = 1.14823746
Iteration 15, loss = 1.14443052
Iteration 16, loss = 1.13545109
Iteration 17, loss = 1.13031813
Iteration 18, loss = 1.11902953
Iteration 19, loss = 1.11372865
Iteration 20, loss = 1.10703823
Iteration 21, loss = 1.09622826
Iteration 22, loss = 1.08154347
Iteration 23, loss = 1.07831238
Iteration 24, loss = 1.07372885
Iteration 25, loss = 1.06327184
Iteration 26, loss = 1.06619736
Iteration 27, loss = 1.04959434
Iteration 28, loss = 1.04284265
Iteration 29, loss = 1.02761738
Iteration 30, loss = 1.02272489
Iteration 31, loss = 1.01919685
Iteration 32, loss = 1.00956499
Iteration 33, loss = 0.99425610
Iteration 34, loss = 0.99532218
Iteration 35, loss = 0.99069944
Iteration 36, loss = 0.98671093
Iteration 37, loss = 0.96517603
Iteration 38, loss = 0.95891830
Iteration 39, loss = 0.95367584
Iteration 40, loss = 0.94397054
Iteration 41, loss = 0.94235047
Iteration 42, loss = 0.93516203
Iteration 43, loss = 0.92985783
Iteration 44, loss = 0.92109383
Iteration 45, loss = 0.90783774
Iteration 46, loss = 0.92519276
Iteration 47, loss = 0.90078778
Iteration 48, loss = 0.90864331
Iteration 49, loss = 0.89139658
Iteration 50, loss = 0.89450404
Iteration 51, loss = 0.88815468
Iteration 52, loss = 0.86398860
Iteration 53, loss = 0.87716786
Iteration 54, loss = 0.87152422
Iteration 55, loss = 0.85379970
Iteration 56, loss = 0.85295028
Iteration 57, loss = 0.84762872
Iteration 58, loss = 0.85117170
Iteration 59, loss = 0.84828352
Iteration 60, loss = 0.83197196
Iteration 61, loss = 0.81883705
Iteration 62, loss = 0.82985756
Iteration 63, loss = 0.82917493
Iteration 64, loss = 0.82029196
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
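The per-iteration loss lines and the tol-based stopping message above match the verbose output of scikit-learn's MLPClassifier. A minimal sketch, assuming synthetic stand-in data (the original feature matrix, seed, and solver settings are not shown in this log), that reproduces this kind of log for the first architecture (the "1000_500" model named below):

from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the real 5-class (A1-C1) feature matrix;
# the actual features behind this log are an assumption here.
X, y = make_classification(n_samples=2000, n_features=100, n_classes=5,
                           n_informative=20, random_state=0)

clf = MLPClassifier(
    hidden_layer_sizes=(1000, 500),  # the "1000_500" architecture
    tol=1e-4,                        # matches "tol=0.000100" in the stopping message
    verbose=True,                    # prints "Iteration N, loss = ..." each epoch
    random_state=0,                  # hypothetical; the original seed is unknown
)
clf.fit(X, y)

The "two consecutive epochs" wording suggests an older scikit-learn release; newer versions stop after n_iter_no_change epochs without improvement (default 10) and word the message accordingly.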
Confusion matrix (rows: true level, columns: predicted level; order A1 A2 B1 B2 C1)
 668  102   32  109   21
 158 1499  200  517   93
  86  187 2155 1126  218
  52  111  291 3784  428
  11   66  120 1401 2187
************************************************************
        Recall          Precision       F1
A1      0.716738197425  0.685128205128  0.700576822234
A2      0.607620591812  0.762849872774  0.676444043321
B1      0.57131495228   0.770192994996  0.65601217656
B2      0.810972996142  0.54548075537   0.652245109024
C1      0.577807133421  0.74211062097   0.649732620321
------------------------------------------------------------
Average F1: 0.667002154292
Best classifier: siwoco-svalex-d-topic-bigram-compact-nh.csv-mlp-1000_500.pickle, F1: 0.667002154292
Iteration 1, loss = 1.41151680
Iteration 2, loss = 1.27024759
Iteration 3, loss = 1.23111660
Iteration 4, loss = 1.20230034
Iteration 5, loss = 1.18394528
Iteration 6, loss = 1.15792778
Iteration 7, loss = 1.14594665
Iteration 8, loss = 1.13226360
Iteration 9, loss = 1.11452620
Iteration 10, loss = 1.09820225
Iteration 11, loss = 1.07923586
Iteration 12, loss = 1.06433618
Iteration 13, loss = 1.04888692
Iteration 14, loss = 1.03050483
Iteration 15, loss = 1.01663433
Iteration 16, loss = 1.00652480
Iteration 17, loss = 0.98714291
Iteration 18, loss = 0.97500442
Iteration 19, loss = 0.95951131
Iteration 20, loss = 0.94523237
Iteration 21, loss = 0.93611174
Iteration 22, loss = 0.91550521
Iteration 23, loss = 0.91389205
Iteration 24, loss = 0.90308356
Iteration 25, loss = 0.89241600
Iteration 26, loss = 0.87864259
Iteration 27, loss = 0.87569415
Iteration 28, loss = 0.86544661
Iteration 29, loss = 0.85119932
Iteration 30, loss = 0.84301555
Iteration 31, loss = 0.83845531
Iteration 32, loss = 0.83966548
Iteration 33, loss = 0.82359338
Iteration 34, loss = 0.81910791
Iteration 35, loss = 0.82000863
Iteration 36, loss = 0.81091616
Iteration 37, loss = 0.80667950
Iteration 38, loss = 0.80104551
Iteration 39, loss = 0.79467179
Iteration 40, loss = 0.79057311
Iteration 41, loss = 0.78430801
Iteration 42, loss = 0.79344981
Iteration 43, loss = 0.78106335
Iteration 44, loss = 0.77546895
Iteration 45, loss = 0.76744378
Iteration 46, loss = 0.77006133
Iteration 47, loss = 0.76358232
Iteration 48, loss = 0.77591392
Iteration 49, loss = 0.75622101
Iteration 50, loss = 0.76075541
Iteration 51, loss = 0.75058510
Iteration 52, loss = 0.74988546
Iteration 53, loss = 0.74199794
Iteration 54, loss = 0.74396934
Iteration 55, loss = 0.74448147
Iteration 56, loss = 0.74018260
Iteration 57, loss = 0.73281807
Iteration 58, loss = 0.74013454
Iteration 59, loss = 0.73182494
Iteration 60, loss = 0.73216567
Iteration 61, loss = 0.72403816
Iteration 62, loss = 0.73906084
Iteration 63, loss = 0.73005007
Iteration 64, loss = 0.72139841
Iteration 65, loss = 0.71720542
Iteration 66, loss = 0.72225095
Iteration 67, loss = 0.71639589
Iteration 68, loss = 0.72093706
Iteration 69, loss = 0.71324224
Iteration 70, loss = 0.73219052
Iteration 71, loss = 0.71278867
Iteration 72, loss = 0.70307615
Iteration 73, loss = 0.69047978
Iteration 74, loss = 0.70566755
Iteration 75, loss = 0.71029788
Iteration 76, loss = 0.70625470
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
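The Recall/Precision/F1 columns in these tables follow directly from the confusion matrix: each class's recall divides the diagonal entry by its row sum (all true instances of that level), precision divides it by its column sum (all predictions of that level), and the reported "Average F1" is the unweighted (macro) mean of the per-class F1 scores. A short check, recomputing the 1000_500 model's table from its matrix above:

import numpy as np

# Confusion matrix of the 1000_500 model (rows: true A1-C1, columns: predicted).
cm = np.array([
    [668,  102,   32,  109,   21],
    [158, 1499,  200,  517,   93],
    [ 86,  187, 2155, 1126,  218],
    [ 52,  111,  291, 3784,  428],
    [ 11,   66,  120, 1401, 2187],
])

recall = np.diag(cm) / cm.sum(axis=1)     # correct / all true instances per class
precision = np.diag(cm) / cm.sum(axis=0)  # correct / all predictions per class
f1 = 2 * precision * recall / (precision + recall)

for level, r, p, f in zip(["A1", "A2", "B1", "B2", "C1"], recall, precision, f1):
    print(f"{level}  recall={r:.6f}  precision={p:.6f}  F1={f:.6f}")
print("Average F1:", f1.mean())  # unweighted macro mean: 0.667002...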
Confusion matrix (rows: true level, columns: predicted level; order A1 A2 B1 B2 C1)
 703   82   28   49   70
 129 1712   90  191  345
  77  176 2226  487  806
  34   99  135 3205 1193
  10   34   50  481 3210
************************************************************
        Recall          Precision       F1
A1      0.754291845494  0.737670514166  0.745888594164
A2      0.693960275638  0.814075130766  0.749234135667
B1      0.5901378579    0.880189798339  0.706554515156
B2      0.686883840549  0.726263312939  0.706024892609
C1      0.848084544254  0.570768136558  0.682325433096
------------------------------------------------------------
Average F1: 0.718005514139
Best classifier: siwoco-svalex-d-topic-bigram-compact-nh.csv-mlp-1000_200.pickle, F1: 0.718005514139
Iteration 1, loss = 1.39669513
Iteration 2, loss = 1.23092260
Iteration 3, loss = 1.18372219
Iteration 4, loss = 1.15007254
Iteration 5, loss = 1.11795176
Iteration 6, loss = 1.09203509
Iteration 7, loss = 1.06191715
Iteration 8, loss = 1.03713142
Iteration 9, loss = 1.00780452
Iteration 10, loss = 0.97911537
Iteration 11, loss = 0.95793701
Iteration 12, loss = 0.92457695
Iteration 13, loss = 0.89971460
Iteration 14, loss = 0.87730020
Iteration 15, loss = 0.86278500
Iteration 16, loss = 0.83398762
Iteration 17, loss = 0.81249432
Iteration 18, loss = 0.79202377
Iteration 19, loss = 0.77175226
Iteration 20, loss = 0.76830272
Iteration 21, loss = 0.75769455
Iteration 22, loss = 0.75178979
Iteration 23, loss = 0.73023983
Iteration 24, loss = 0.71083299
Iteration 25, loss = 0.71041000
Iteration 26, loss = 0.70697462
Iteration 27, loss = 0.68547457
Iteration 28, loss = 0.69224941
Iteration 29, loss = 0.68648225
Iteration 30, loss = 0.67430251
Iteration 31, loss = 0.66194001
Iteration 32, loss = 0.65417227
Iteration 33, loss = 0.66224307
Iteration 34, loss = 0.65107110
Iteration 35, loss = 0.67200574
Iteration 36, loss = 0.63604518
Iteration 37, loss = 0.65056890
Iteration 38, loss = 0.64771564
Iteration 39, loss = 0.63726401
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
Confusion matrix (rows: true level, columns: predicted level; order A1 A2 B1 B2 C1)
 722   59   32   81   38
 141 1722   94  352  158
  58   95 2468  792  359
  25   65  122 3869  585
   6   31   47  919 2782
************************************************************
        Recall          Precision       F1
A1      0.774678111588  0.758403361345  0.766454352442
A2      0.698013781921  0.87322515213   0.775850416761
B1      0.654294803818  0.893231994209  0.755317521041
B2      0.829189884269  0.643439215034  0.724599681618
C1      0.73500660502   0.709331973483  0.721941092513
------------------------------------------------------------
Average F1: 0.748832612875
Best classifier: siwoco-svalex-d-topic-bigram-compact-nh.csv-mlp-1000_500_200.pickle, F1: 0.748832612875
Iteration 1, loss = 1.33104228
Iteration 2, loss = 1.20346920
Iteration 3, loss = 1.16401840
Iteration 4, loss = 1.13328039
Iteration 5, loss = 1.10234616
Iteration 6, loss = 1.07127512
Iteration 7, loss = 1.04500742
Iteration 8, loss = 1.00936258
Iteration 9, loss = 0.97727403
Iteration 10, loss = 0.94370197
Iteration 11, loss = 0.91181290
Iteration 12, loss = 0.88612274
Iteration 13, loss = 0.86524172
Iteration 14, loss = 0.82825051
Iteration 15, loss = 0.81169525
Iteration 16, loss = 0.78714301
Iteration 17, loss = 0.77279375
Iteration 18, loss = 0.74496337
Iteration 19, loss = 0.74414057
Iteration 20, loss = 0.72335508
Iteration 21, loss = 0.70204829
Iteration 22, loss = 0.69990612
Iteration 23, loss = 0.68343503
Iteration 24, loss = 0.67307930
Iteration 25, loss = 0.67086144
Iteration 26, loss = 0.66069142
Iteration 27, loss = 0.65106786
Iteration 28, loss = 0.65464785
Iteration 29, loss = 0.64499308
Iteration 30, loss = 0.65077066
Iteration 31, loss = 0.61696463
Iteration 32, loss = 0.64215493
Iteration 33, loss = 0.61659225
Iteration 34, loss = 0.61198555
Iteration 35, loss = 0.61709048
Iteration 36, loss = 0.60790091
Iteration 37, loss = 0.61469324
Iteration 38, loss = 0.60138699
Iteration 39, loss = 0.59863811
Iteration 40, loss = 0.59568754
Iteration 41, loss = 0.59865195
Iteration 42, loss = 0.60172210
Iteration 43, loss = 0.59474056
Iteration 44, loss = 0.58407398
Iteration 45, loss = 0.57945125
Iteration 46, loss = 0.59990012
Iteration 47, loss = 0.59651762
Iteration 48, loss = 0.57310489
Iteration 49, loss = 0.57561555
Iteration 50, loss = 0.59178559
Iteration 51, loss = 0.56555400
Iteration 52, loss = 0.57246449
Iteration 53, loss = 0.58492291
Iteration 54, loss = 0.56626347
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
Confusion matrix (rows: true level, columns: predicted level; order A1 A2 B1 B2 C1)
 782   20   16   61   53
 146 1789   63  241  228
  57   70 2592  550  503
  33   54  102 3674  803
   5    6   55  604 3115
************************************************************
        Recall          Precision       F1
A1      0.839055793991  0.764418377322  0.8
A2      0.725172274017  0.922640536359  0.81207444394
B1      0.687168610817  0.916548797737  0.785454545455
B2      0.787398199743  0.716179337232  0.750102082483
C1      0.822985468956  0.662484049341  0.734063862378
------------------------------------------------------------
Average F1: 0.776338986851
Best classifier: siwoco-svalex-d-topic-bigram-compact-nh.csv-mlp-1000_500_200_50.pickle, F1: 0.776338986851
Iteration 1, loss = 1.32148322
Iteration 2, loss = 1.19000170
Iteration 3, loss = 1.14051782
Iteration 4, loss = 1.10326718
Iteration 5, loss = 1.06418372
Iteration 6, loss = 1.02701082
Iteration 7, loss = 0.98490259
Iteration 8, loss = 0.94788270
Iteration 9, loss = 0.91161392
Iteration 10, loss = 0.87645161
Iteration 11, loss = 0.84318358
Iteration 12, loss = 0.81503214
Iteration 13, loss = 0.79622325
Iteration 14, loss = 0.75960748
Iteration 15, loss = 0.74092739
Iteration 16, loss = 0.72395460
Iteration 17, loss = 0.70488073
Iteration 18, loss = 0.69911362
Iteration 19, loss = 0.69003583
Iteration 20, loss = 0.68276400
Iteration 21, loss = 0.66432495
Iteration 22, loss = 0.66111947
Iteration 23, loss = 0.65595776
Iteration 24, loss = 0.64046946
Iteration 25, loss = 0.63676308
Iteration 26, loss = 0.62556736
Iteration 27, loss = 0.64609855
Iteration 28, loss = 0.62989166
Iteration 29, loss = 0.62109403
Iteration 30, loss = 0.61390071
Iteration 31, loss = 0.61081286
Iteration 32, loss = 0.61573452
Iteration 33, loss = 0.59100513
Iteration 34, loss = 0.59330247
Iteration 35, loss = 0.61128970
Iteration 36, loss = 0.60331216
Training loss did not improve more than tol=0.000100 for two consecutive epochs. Stopping.
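The "Best classifier: ...pickle, F1: ..." lines indicate that each architecture's fitted model is serialized and the running best is tracked by average F1, which climbs from 0.667 (1000_500) through 0.718 (1000_200) and 0.749 (1000_500_200) to 0.776 (1000_500_200_50) as hidden layers are added. A hedged reconstruction of that selection loop, in which the data, variable names, train/test split, and file naming are all assumptions (only the architectures, the tol, and the macro-F1 selection criterion come from the log):

import pickle
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data; the original siwoco-svalex feature files are not shown.
X, y = make_classification(n_samples=3000, n_features=100, n_classes=5,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Architectures in the order they appear in the log above.
configs = [(1000, 500), (1000, 200), (1000, 500, 200), (1000, 500, 200, 50)]

best_f1, best_name = -1.0, None
for layers in configs:
    clf = MLPClassifier(hidden_layer_sizes=layers, tol=1e-4, verbose=True,
                        random_state=0)
    clf.fit(X_train, y_train)
    # "Average F1" in the log is the unweighted per-class mean, i.e. macro F1.
    avg_f1 = f1_score(y_test, clf.predict(X_test), average="macro")
    name = "mlp-" + "_".join(map(str, layers)) + ".pickle"  # hypothetical naming
    with open(name, "wb") as fh:
        pickle.dump(clf, fh)
    if avg_f1 > best_f1:
        best_f1, best_name = avg_f1, name
    print(f"Best classifier: {best_name}, F1: {best_f1}")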