So, to benchmark your Ceph cluster:
https://tracker.ceph.com/projects/c [...] erformance
https://www.proxmox.com/en/download [...] -benchmark (some results on 1/10/100G links here)
rados:

NAME
       rados - rados object storage utility
To list your pools.
Note: you really don't want to forget the -p poolname flag, otherwise the thing will yell its help output at you across 12 parsecs.
Create a dedicated pool for benchmarking (so you don't mix it up with the one you're already using). Avoid naming it "bench", since the benchmark subcommand is itself bench.
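For instance, something like this (a sketch: the PG counts of 64 are an assumption, size them for your own OSD count):

```shell
# Create a throwaway pool just for benchmarking.
# 64 placement groups is a guess; pick a value that fits your cluster.
ceph osd pool create xatbench 64 64
```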
A fresh lspools:

root@xat-pve0:~# rados lspools
poolceph
xatbench
A df:

root@xat-pve0:~# rados df
POOL_NAME USED    OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD      WR_OPS WR
xatbench  0B      0       0      0      0                  0       0        0      0B      0      0B
poolceph  11.9GiB 3211    0      9633   0                  0       0        168887 2.58GiB 188212 29.0GiB

total_objects 3211
total_used    35.6GiB
total_avail   680GiB
total_space   715GiB
In case you somehow still haven't seen the help it's yelled in your face 100 times, here it is:

bench <seconds> write|seq|rand [-t concurrent_operations] [--no-cleanup] [--run-name run_name] [--no-hints]
    default is 16 concurrent IOs and 4 MB ops
    default is to clean up after write benchmark
    default run-name is 'benchmark_last_metadata'
and the man page:

bench seconds mode [ -b objsize ] [ -t threads ]
       Benchmark for seconds. The mode can be write, seq, or rand. seq and rand are read benchmarks, either sequential or random.
       Before running one of the reading benchmarks, run a write benchmark with the --no-cleanup option. The default object size
       is 4 MB, and the default number of simulated threads (parallel writes) is 16. The --run-name <label> option is useful for
       benchmarking a workload test from multiple clients. The <label> is an arbitrary object name. It is "benchmark_last_metadata"
       by default, and is used as the underlying object name for "read" and "write" ops.
       Note: -b objsize option is valid only in write mode.
       Note: write and seq must be run on the same host otherwise the objects created by write will have names that will fail seq.
Test writes:

rados -p xatbench bench 60 write
Which gives:
root@xat-pve0:~# rados -p xatbench bench 60 write
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects
Object prefix: benchmark_data_xat-pve0_485863
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
  0      0       0        0        0        0          -          0
  1     16      25        9  35.9965       36   0.998907   0.574201
  2     16      43       27   53.994       72   0.684063   0.899475
  3     16      62       46  61.3272       76   0.844223   0.899737
  4     16      77       61  60.9943       60    1.01261   0.896084
  5     16      90       74  59.1945       52   0.961734     0.9456
  6     16     108       92  61.3278       72    0.70872   0.958785
  7     16     126      110  62.8516       72   0.806659   0.953023
  8     16     143      127  63.4945       68   0.696224   0.931923
  9     16     162      146  64.8833       76    1.15378   0.932752
 10     16     179      163  65.1944       68   0.887739   0.934731
 11     16     196      180   65.449       68    1.15079   0.936336
 12     16     213      197  65.6611       68   0.688204     0.9338
 13     16     232      216   66.456       76   0.597817   0.921498
 14     16     248      232  66.2802       64   0.580539   0.922821
 15     16     265      249  66.3945       68    1.73526    0.93345
 16     16     280      264  65.9946       60   0.876674   0.932196
 17     16     299      283  66.5828       76   0.594041   0.939327
 18     16     317      301  66.8834       72    1.11526   0.938618
 19     16     331      315  66.3104       56    1.11578   0.939416
2019-03-29 15:10:49.289101 min lat: 0.188397 max lat: 1.80652 avg lat: 0.942546
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
 20     16     345      329  65.7946       56   0.891073   0.942546
 21     16     364      348  66.2802       76   0.903314    0.94632
 22     16     379      363  65.9947       60    1.31951   0.946221
 23     16     398      382  66.4294       76    1.01748   0.947443
 24     16     413      397  66.1613       60   0.925006   0.947517
 25     16     432      416  66.5547       76   0.603453   0.943786
 26     16     448      432  66.4562       64   0.837269   0.943385
 27     16     464      448   66.365       64   0.725709   0.945151
 28     16     479      463  66.1376       60    0.94477   0.949264
 29     16     495      479  66.0637       64   0.940942   0.949215
 30     16     511      495  65.9947       64   0.888126   0.949205
 31     16     529      513  66.1882       72   0.888514   0.951565
 32     16     544      528  65.9947       60    1.08185   0.954544
 33     16     559      543  65.8128       60    1.09412   0.954658
 34     16     579      563  66.2299       80   0.821289    0.95374
 35     16     595      579  66.1661       64   0.801038   0.954038
 36     16     613      597   66.328       72   0.865304   0.949621
 37     16     630      614   66.373       68    1.04715   0.949593
 38     16     647      631  66.4157       68    1.15502   0.951346
 39     16     665      649  66.5588       72    1.19923   0.948586
2019-03-29 15:11:09.290671 min lat: 0.188397 max lat: 1.80652 avg lat: 0.949873
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
 40     16     681      665  66.4947       64    1.21408   0.949873
 41     16     698      682  66.5312       68    1.13591   0.949663
 42     16     714      698  66.4708       64    1.05628   0.950075
 43     16     730      714  66.4132       64    1.03765   0.951053
 44     16     748      732  66.5401       72   0.886634   0.952005
 45     16     767      751  66.7501       76   0.889279   0.950904
 46     16     783      767  66.6903       64   0.967731   0.948196
 47     16     798      782  66.5478       60    1.25205    0.94949
 48     16     819      803  66.9113       84   0.933997   0.947669
 49     16     834      818  66.7701       60   0.955641   0.948067
 50     16     852      836  66.8746       72   0.804057   0.947302
 51     16     868      852  66.8181       64   0.962667    0.94659
 52     16     886      870  66.9177       72   0.936098   0.948383
 53     16     900      884  66.7116       56    1.12841   0.947633
 54     16     923      907  67.1798       92   0.563872   0.945004
 55     16     937      921  66.9764       56    1.05927   0.945962
 56     16     953      937  66.9232       64     1.0027   0.947432
 57     16     970      954   66.942       68   0.807536   0.946948
 58     16     987      971  66.9601       68   0.994893   0.946941
 59     16    1005      989  67.0455       72   0.818603   0.948479
2019-03-29 15:11:29.292299 min lat: 0.188397 max lat: 1.80652 avg lat: 0.946895
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
 60     16    1019     1003  66.8613       56      1.109   0.946895
Total time run:         60.460432
Total writes made:      1020
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     67.4822
Stddev Bandwidth:       8.59142
Max bandwidth (MB/sec): 92
Min bandwidth (MB/sec): 36
Average IOPS:           16
Stddev IOPS:            2
Max IOPS:               23
Min IOPS:               9
Average Latency(s):     0.947795
Stddev Latency(s):      0.265853
Max latency(s):         1.80652
Min latency(s):         0.188397
Cleaning up (deleting benchmark objects)
Removed 1020 objects
Clean up completed and total clean up time :0.341064
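As an aside, the summary numbers are easy to cross-check from the raw counters (a quick sketch; the figures are copied from the write run above, and rados counts "MB" as MiB):

```python
# Cross-check rados bench's write summary from its raw counters.
# Figures are copied from the 60 s write run above.
total_writes = 1020          # "Total writes made"
object_size = 4194304        # bytes per object (4 MiB)
total_time = 60.460432       # "Total time run" in seconds

# rados reports "MB/sec" in MiB: objects * 4 MiB / elapsed time
bandwidth = total_writes * object_size / (1024 * 1024) / total_time
iops = total_writes / total_time

print(f"bandwidth = {bandwidth:.4f} MB/s")  # ~67.4822, matching the summary
print(f"avg IOPS  = {iops:.2f}")            # ~16.87, reported truncated as 16
```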
Don't do what I did: don't forget --no-cleanup.
So it should rather be:

rados -p xatbench bench 60 write --no-cleanup
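Put together, a full pass could look like this (a sketch; `rados cleanup` deletes the objects that --no-cleanup left behind):

```shell
# Write for 60 s, keeping the objects so the read benches have data to read
rados -p xatbench bench 60 write --no-cleanup

# Sequential, then random reads against the objects written above
rados -p xatbench bench 60 seq
rados -p xatbench bench 60 rand

# Remove the leftover benchmark objects once done
rados -p xatbench cleanup
```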
Sequential read test:

root@xat-pve0:~# rados -p xatbench bench 60 seq --run-name benchmark_last_metadata
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
  0      0       0        0        0        0          -          0
  1     16     128      112  447.927      448  0.0103482  0.0985826
  2     16     221      205  409.935      372  0.0779508   0.128398
  3     16     284      268  357.283      252   0.010306     0.1538
  4     16     344      328  327.959      240  0.0105356   0.177442
  5     16     417      401  320.761      292  0.0188097   0.182753
  6     16     499      483  321.962      328   0.014558   0.185622
  7     16     575      559  319.392      304   0.084684   0.189243
  8     16     645      629  314.465      280  0.0892275   0.192079
  9     16     722      706  313.743      308  0.0101148   0.195078
 10     16     801      785  313.966      316  0.0146307   0.197449
 11     16     881      865  314.512      320  0.0104793   0.197556
 12     16     959      943    314.3      312  0.0101865   0.197969
 13     16    1021     1005  309.197      248  0.0104307   0.202135
Total time run:       13.591278
Total reads made:     1021
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   300.487
Average IOPS:         75
Stddev IOPS:          13
Max IOPS:             112
Min IOPS:             60
Average Latency(s):   0.211934
Max latency(s):       1.1223
Min latency(s):       0.00998481
And random:

root@xat-pve0:~# rados -p xatbench bench 60 rand
hints = 1
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
  0      0       0        0        0        0          -          0
  1     16     111       95  379.949      380   0.601081    0.11559
  2     16     190      174  347.959      316    0.65014   0.154652
  3     16     259      243  323.963      276   0.197007   0.173761
  4     16     313      297  296.967      216  0.0102301   0.196032
  5     16     404      388  310.366      364   0.139236    0.19083
  6     16     475      459  305.967      284   0.799421   0.197131
  7     16     562      546  311.968      348  0.0102545   0.194157
  8     16     623      607   303.47      244  0.0102083    0.20109
  9     16     695      679  301.749      288    0.81836   0.202635
 10     16     783      767   306.77      352 0.00139433   0.201824
 11     16     856      840  305.425      292  0.0146209   0.203744
 12     16     927      911  303.638      284 0.00137652   0.203593
 13     16    1000      984  302.741      292   0.633875   0.206102
 14     16    1069     1053  300.829      276  0.0102572   0.207132
 15     16    1131     1115  297.306      248   0.722594   0.209687
 16     16    1210     1194  298.472      316  0.0104317   0.209589
 17     16    1276     1260  296.444      264  0.0170445   0.211716
 18     16    1343     1327  294.862      268   0.698329    0.21309
 19     16    1406     1390  292.605      252   0.758855   0.214451
2019-03-29 15:18:13.776643 min lat: 0.00111935 max lat: 0.906395 avg lat: 0.212661
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
 20     16    1490     1474  294.774      336  0.0102305   0.212661
 21     16    1564     1548   294.83      296   0.796544   0.213308
 22     16    1627     1611  292.883      252  0.0103829   0.214817
 23     16    1693     1677  291.626      264   0.786553   0.215525
 24     16    1786     1770  294.973      372  0.0104925   0.213176
 25     16    1857     1841  294.533      284  0.0103907   0.213799
 26     16    1923     1907  293.358      264   0.839383   0.215008
 27     16    1999     1983  293.751      304   0.150419   0.214408
 28     16    2071     2055  293.545      288   0.237225   0.214833
 29     16    2145     2129  293.628      296   0.702958   0.215068
 30     16    2219     2203  293.707      296   0.579406   0.215398
 31     16    2293     2277   293.78      296   0.694577   0.215467
 32     15    2381     2366  295.723      356   0.016919   0.214095
 33     16    2453     2437  295.367      284 0.00186844   0.213752
 34     16    2524     2508  295.032      284   0.868729   0.214188
 35     16    2636     2620  299.401      448 0.00859474   0.211306
 36     16    2708     2692  299.084      288  0.0104791   0.211509
 37     16    2769     2753  297.594      244   0.739833   0.212777
 38     16    2854     2838   298.71      340 0.00154313   0.211864
 39     16    2914     2898  297.204      240   0.161037   0.212684
2019-03-29 15:18:33.778449 min lat: 0.00111935 max lat: 0.918145 avg lat: 0.212591
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
 40     16    2995     2979  297.873      324  0.0013718   0.212591
 41     16    3068     3052  297.729      292 0.00146886   0.212668
 42     16    3138     3122  297.307      280   0.736173   0.212903
 43     16    3210     3194  297.089      288   0.772264   0.213191
 44     16    3295     3279  298.064      340  0.0104231   0.212631
 45     16    3357     3341  296.951      248  0.0113773   0.213176
 46     16    3435     3419  297.277      312  0.0109177    0.21351
 47     16    3520     3504  298.185      340 0.00123782   0.212441
 48     16    3590     3574  297.806      280  0.0112383   0.212681
 49     16    3670     3654  298.258      320  0.0151619   0.212571
 50     16    3726     3710  296.773      224   0.660348   0.213871
 51     15    3801     3786  296.914      304  0.0103053   0.213435
 52     16    3886     3870  297.665      336  0.0147818   0.213142
 53     16    3974     3958   298.69      352 0.00144378   0.212345
 54     16    4038     4022  297.899      256   0.109662   0.212624
 55     16    4095     4079  296.627      228   0.714336   0.213769
 56     16    4160     4144  295.973      260   0.645317   0.214595
 57     16    4224     4208  295.271      256    0.01048   0.214832
 58     15    4291     4276   294.87      272  0.0104571   0.215162
 59     16    4396     4380  296.922      416   0.575663    0.21401
2019-03-29 15:18:53.780272 min lat: 0.00111935 max lat: 0.965153 avg lat: 0.214156
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
 60     16    4461     4445  296.307      260   0.146428   0.214156
Total time run:       60.584579
Total reads made:     4462
Read size:            4194304
Object size:          4194304
Bandwidth (MB/sec):   294.596
Average IOPS:         73
Stddev IOPS:          11
Max IOPS:             112
Min IOPS:             54
Average Latency(s):   0.216632
Max latency(s):       1.42983
Min latency(s):       0.00111935
So the --run-name option was pointless here (benchmark_last_metadata is the default anyway).
edit: on the other hand, I don't get why my original poolceph writes faster:
Total time run:         60.853869
Total writes made:      1236
Write size:             4194304
Object size:            4194304
Bandwidth (MB/sec):     81.2438
Stddev Bandwidth:       10.2707
Max bandwidth (MB/sec): 104
Min bandwidth (MB/sec): 56
Average IOPS:           20
Stddev IOPS:            2
Max IOPS:               26
Min IOPS:               14
Average Latency(s):     0.7859
Stddev Latency(s):      0.460313
Max latency(s):         2.3258
Min latency(s):         0.1532
Cleaning up (deleting benchmark objects)
Removed 1236 objects
Clean up completed and total clean up time :0.406199
So maybe you should benchmark before the pool is mounted/in use?
To delete your pool:
WARNING: This will PERMANENTLY DESTROY an entire pool of objects with no way back. To confirm, pass the pool to remove twice, followed by --yes-i-really-really-mean-it
so:

rados rmpool xatbench xatbench --yes-i-really-really-mean-it
For comparison, a basic hdparm -tT:

root@xat-pve0:~# hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:   34676 MB in  1.99 seconds = 17413.45 MB/sec
 Timing buffered disk reads: 1422 MB in  3.00 seconds = 473.47 MB/sec
I haven't tuned anything, I only have one interface, my MTU is 1500, etc. So I expect better; we'll see after tuning.
Message edited by XaTriX on 29-03-2019 at 15:27:16