javascript - Measure WebGL texture load in ms


How can I measure WebGL texture load time in milliseconds?

Right now I have an array of images rendered out to a map using a game loop, and I'm interested in capturing the time it takes WebGL to load every texture image, in milliseconds. I'm wondering how this can be measured, because JavaScript is not synchronous with WebGL.

The way to measure timing in WebGL is to figure out how much work you can get done in a certain amount of time. Pick a target speed, say 30fps, use requestAnimationFrame, and keep increasing the amount of work until you go over the target.

var targetTime   = 1/30;
var amountOfWork = 1;

var then = 0;
function test(time) {
  time *= 0.001;  // convert to seconds

  var deltaTime = time - then;
  then = time;

  if (deltaTime < targetTime) {
    amountOfWork += 1;
  }

  for (var ii = 0; ii < amountOfWork; ++ii) {
    doWork();
  }

  requestAnimationFrame(test);
}
requestAnimationFrame(test);
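
The doWork() call above is left undefined. As a rough sketch for the texture case, assuming you already have a WebGLRenderingContext named gl and an array of loaded Image objects named images (both names are placeholders, not from the question), each unit of work could upload the next image as a texture:

var imageIndex = 0;
function doWork() {
  // Upload one image as a texture; `gl` and `images` are assumed to exist.
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                images[imageIndex % images.length]);
  imageIndex += 1;
}

An upload alone is not enough, though; caveat 3 below explains why, and the sketch after the caveats adds the required draw.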

It's not quite that simple because browsers, at least in my experience, don't seem to give stable timing for frames.

Caveats

  1. Don't assume requestAnimationFrame will fire at 60fps.

    There are plenty of devices that run faster (VR) or slower (low-end HD-DPI monitors).

  2. Don't measure the time from when you start emitting commands until the time you stop.

    Measure the time since the last requestAnimationFrame. WebGL just inserts commands into a buffer; those commands execute in the driver, possibly in another process, so this is wrong:

    var start = performance.now();        // WRONG!
    gl.someCommand(...);                  // WRONG!
    gl.flush();                           // WRONG!
    var time = performance.now() - start; // WRONG!
  3. Actually use the resource.

    Many resources are lazily initialized, so uploading a resource but not using it will not give an accurate measurement. You'll need to actually draw with each texture you upload. Of course, make it a small draw: 1 pixel, 1 triangle, with a simple shader. The shader must actually access the resource, otherwise the driver may skip the lazy initialization (see the sketch after this list).

  4. Don't assume different types/sizes of textures will have proportional changes in speed.

    Drivers do different things. For example, some GPUs might not support anything but RGBA textures. If you upload a LUMINANCE texture, the driver will expand it to RGBA. So, if you timed using RGBA textures and assumed a LUMINANCE texture of the same dimensions would upload 4x as fast, you'd be wrong.

    Similarly, don't assume different size textures upload at a speed proportional to their sizes. Internal buffers in drivers and other limits mean that different sizes might take different paths.

    In other words, you can't assume a 1024x1024 texture will upload 4x as slow as a 512x512 texture.

  5. Be aware this won't promise real-world results.

    By that I mean, for example, if you're on tiled hardware (an iPhone for example), the way the GPU works is to gather most of the drawing commands, separate them into tiles, cull any draws that are invisible, and only draw what's left, whereas most desktop GPUs draw every pixel of every triangle.

    Because a tiled GPU does everything at the end, it means that if you keep uploading data to the same texture and draw something between each upload, it has to keep copies of all your textures until it draws. Internally there might be some point at which it flushes and draws what it has before buffering again.

    Even a desktop driver wants to pipeline uploads: you upload contents to texture B, draw, upload new contents to texture B, draw. If the driver is in the middle of doing the first draw, it doesn't want to wait for the GPU before it can replace the contents. Rather, it wants to upload the new contents somewhere else not being used and then, when it can, point the texture at the new contents.

    In normal use this isn't a problem because almost no one uploads tons of textures all the time. At most you upload 1 or 2 video frames or 1 or 2 procedurally generated textures. But when you're benchmarking you're stressing the driver and making it do things it won't be doing normally. In the example above, the driver might assume a texture is unlikely to be uploaded 10000 times a frame, so you'll hit a limit where it has to freeze the pipeline until some of your queued textures are drawn. That freeze will make your result appear slower than it would be in normal use cases.

    The point being, you might benchmark and be told it takes 5ms to upload a texture when in truth it only takes 3ms; you just stalled the pipeline many times, which outside your benchmark is unlikely to happen.
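
As a rough sketch of caveat 3, still using the hypothetical gl, images and imageIndex from the earlier sketch, and additionally assuming a program built from a trivial pass-through vertex shader plus a fragment shader that samples the texture (for example gl_FragColor = texture2D(u_tex, vec2(0.5));), each unit of work could upload a texture and immediately draw with it:

function doWork() {
  // Upload the next image as a texture.
  var tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE,
                images[imageIndex % images.length]);
  // CLAMP_TO_EDGE + NEAREST so even non-power-of-2 images are usable
  // without mipmaps.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  imageIndex += 1;

  // Then actually sample it with a tiny draw so the driver can't defer
  // initialization. `program` and the buffer/attribute setup for a
  // roughly 1-pixel triangle are assumed to be done elsewhere
  // (placeholder names).
  gl.useProgram(program);
  gl.uniform1i(gl.getUniformLocation(program, "u_tex"), 0);
  gl.drawArrays(gl.TRIANGLES, 0, 3);
}

How many of these calls fit under the frame budget in the test() loop above gives you an uploads-per-frame figure, which you can convert to an approximate milliseconds-per-upload number, subject to all the caveats listed.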

