Friday, November 28, 2008

Webcast survey brainstorm

Question categories
  1. About the user: Location, Age, ....
  2. Getting started with the webcast: How did you find the webcast....
  3. How do you use webcast: where...
  4. What do you think of webcast: speed, sound quality, ...
  5. Other comments
Candidate questions

About you
  1. Which country are you usually in when listening to the webcast?
  2. How old are you?
  3. Are you a McGill student, faculty or staff member, an alumnus, or a family member or friend of a performer involved in the concerts being webcast by us?
Getting started with the webcast, as a user:
  1. How did you hear about our webcast?
  2. Did you find the webcast service user-friendly?
  3. Are you satisfied with the network connection and streaming speed?
  4. Are you satisfied with the sound quality of our webcast? If not, please tell us what you have in mind.
  5. Did you like the concerts that we selected?
  6. The concerts being webcast are all by McGill musicians. Which specific McGill musicians would you like to be webcast?
  7. Did you like the hosting, interviewing, and other non-music content in some programs? Did they help you gain a better experience of the program?
  8. In some programs there was not much hosting and interviewing, would you like us to add more?
Getting started with the webcast, as a non-user:
  1. If you have not heard about the webcast, would you be interested in trying it? Why?
  2. If not, why not?
Publicity and on-demand service
  1. Via which of the following channels would you prefer to receive our webcast announcements?
  2. If you are interested in a concert, do you prefer listening to the live webcast or downloading the concert recording later?
  3. If the recording of a concert being webcast will be available for downloading later, would you still like to listen to the webcast?
  4. What media format and quality specification do you prefer as downloadable audio?
Other comments
  1. How do you like our webcast in general?
  2. Please tell us what could be done to improve our webcast service.

Tuesday, November 25, 2008

Plans

What do we have now
  1. code, except for RIAA EQ
  2. no doc
  3. no presentation
  4. no report
Work to do
  1. documentation, report, presentation.
  2. test existing code, may need more fitting
  3. scanning two albums (jazz, classical)
  4. coding on my personal software: tagger, breakbeat
  5. chuck
Readings
  1. microsound
  2. art of listening
  3. linux tutorials
  4. computer networks

Other activities
  1. prepare CVs for interns
  2. start translating
  3. canadian news
  4. french
What should stop
  1. unrelated news: like the chinese ones.
  2. kanichi should definitely stop except for exercise.

Monday, November 24, 2008

resample for windows

Problem
  1. each window should start fitting at an azimuth that is consistently spaced from the prev window.
  2. this spacing is determined by samplerate and initial azimuth in Win(1).
  3. so Win(i) should always know where Win(1).va(1) is.
Solution
  1. Win(1).va(1) should be saved somewhere.
  2. Condition 1: Win(i).a_start = Win(1).va(1) + N * a_per_samp.
  3. Condition 2: Win(i).a_start >= Win(i).va(1) + LEN_WIN
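A minimal sketch of the two conditions in Python, under assumed names: `a0` for Win(1).va(1), `a_per_samp` for the angular sampling period, and `lower_bound` for whatever azimuth Condition 2 prescribes as the minimum for Win(i).

```python
import math

def window_start_azimuth(a0, a_per_samp, lower_bound):
    """Smallest azimuth of the form a0 + N * a_per_samp (integer N >= 0)
    that lies on the grid anchored at Win(1).va(1) = a0 (Condition 1)
    and is >= the window's own lower bound (Condition 2)."""
    n = max(0, math.ceil((lower_bound - a0) / a_per_samp))
    return a0 + n * a_per_samp

# Win(1) starts exactly on its own first azimuth; a later window
# snaps up to the next grid position.
print(window_start_azimuth(0.1, 0.25, 0.1))   # 0.1
print(window_start_azimuth(0.1, 0.25, 1.0))   # 0.1 + 4 * 0.25
```

This is why only Win(1).va(1) needs to be saved: every later window can recover its grid-aligned start from it.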

Sunday, November 23, 2008

need to keep the disc rect coords even after unwrapping

used for pairing in bottom-1D-to-2D

window-based calibration

  1. Resulting 3 threads from stitching and unwrapping are not calibrated, i.e., each window of a different thread will start at a different azimuth.
  2. inner/outer edge + bottom of Win(i) need to be calibrated so that they all start/end at the comparable azimuth.

calibrate edge windows

Problem
  1. Later fitting needs to apply to window instead of entire main thread
  2. final stitching requires no overlap in the fitted data
  3. can we leave the window as it is, fit each window, and merge them afterwards?
  4. NO, it's different from fitting the entire main thread; the data used to fit Win(i+1), though having an ov-FOV, is not necessarily the same as that used for Win(i).
  5. The last sample to be interpolated in Win(i) has to use the following WIN_HALF samples, so the last interpolated sample is not the last sample in the tail-FOV of the window; the same holds for the 1st sample of Win(i+1).
  6. WIN_HALF could change depending on our tests.
Solution
  1. All windows have to be calibrated, based on previous window's calibration.
  2. we don't throw away window data, we just define interpolation boundaries, based on fitting window size.
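The interpolation-boundary idea can be sketched as follows, assuming a hypothetical `win_half` parameter standing in for WIN_HALF: window data is kept intact, and only the first/last interpolatable indices are computed.

```python
def interpolation_bounds(n_samples, win_half):
    """Indices of the first and last sample that can be interpolated when
    each interpolated point needs win_half support samples on both sides.
    No window data is thrown away; only these boundaries are defined."""
    first = win_half
    last = n_samples - 1 - win_half
    if first > last:
        raise ValueError("window too small for this fitting half-width")
    return first, last

print(interpolation_bounds(100, 8))  # (8, 91)
```

Because WIN_HALF could change depending on tests, the boundaries are recomputed per run rather than baked into the files.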

Saturday, November 22, 2008

phase unwrapping in win

two conditions
  1. if Win(i)'s last FOV has a gap, then carry the gap over to Win(i+1), and apply phase unwrapping to all azimuths > aa_gap
  2. if Win(i).FOV is not the last FOV of the window, then apply normal unwrapping to all Win(i+k). 
NOTE: no two gaps in the same Win
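A condition-free sketch of the accumulated unwrapping across window files; the two gap conditions above decide exactly which azimuths receive the offset, while this only illustrates carrying `count_revolution` from file to file instead of loading the whole main thread (names are hypothetical).

```python
import math

TWO_PI = 2 * math.pi

def unwrap_window_files(azimuth_files):
    """Accumulated 2*pi unwrapping over window files loaded one at a time:
    carry the revolution count across files and add the offset to every
    azimuth after each detected wrap."""
    count_revolution = 0
    prev = None
    out = []
    for azimuths in azimuth_files:          # each file: raw azimuths in [0, 2*pi)
        unwrapped = []
        for a in azimuths:
            if prev is not None and a < prev:   # wrap detected
                count_revolution += 1
            prev = a
            unwrapped.append(a + TWO_PI * count_revolution)
        out.append(unwrapped)
    return out

files = [[6.0, 6.2, 0.1], [0.3, 0.5]]
print(unwrap_window_files(files))
```

The "no two gaps in the same Win" note keeps the per-file logic simple: at most one increment of the count happens inside any single window.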

Friday, November 21, 2008

multi-piece hard windowing vs. phase unwrapping vs. resampling

Problem
  1. no way to apply simple phase unwrapping to main thread, because it's broken into individual files, and has to be loaded one at a time.
  2. accumulated 2*pi unwrapping
Analysis
  1. when stitching, we save data based on FOV count; have 1 FOV overlap
  2. after stitching, do a thorough truncating for all window files, so that the window-based resampling can start at an integer multiple of sampling period, and consecutive window file will have no resampling overlap.
    1. Prev window-file 1: truncate at the 1st resample node (RN) bound of the ov-FOV. NOTE: the resample pos is an interpolation point and does not exist in the data, so we need a true data bound, i.e., the lowest data point greater than the resample window that bounds the RN.
    2. Suc window-file 2: truncate at the 2nd RN bound of the ov-FOV

  3. when resampling, we resample each window file.
  4. window file should be larger than a processing window for polyfit.
  5. when final concatenating, do beginning-alignment, discard overlap.
  6. for unwrapping, data needed: i_window_file; count_revolution;
  7. for resampling, data needed: sample rate/period in angles, resample window-size
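The truncation point in step 2 can be sketched with a binary search; `half_resample_win` is a hypothetical name for the half-width of the resample window bounding the RN.

```python
from bisect import bisect_right

def truncate_index(azimuths, resample_node, half_resample_win):
    """Index of the first real data point past the resample window that
    bounds the resample node (RN).  The RN itself is an interpolation
    point and does not exist in the data, so the cut must land on a true
    data bound (azimuths assumed sorted)."""
    return bisect_right(azimuths, resample_node + half_resample_win)

va = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
print(truncate_index(va, 0.25, 0.1))  # first index with azimuth > 0.35
```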

hard windowing

we can't just load main thread all the time.
from the hard stitching on, there has to be a hard windowing technique to save
the main thread back to pieces in favor of later audio resampling.

only after resampling can we FINALLY stitch all data into single main thread and make it sound.

stitch overlap simplified

Don't have to average between ov-FOVs; just pick the next focus-FOV and truncate (for the top edge) or merge (for the bottom 2D band) the suc-FOV into the main thread

Monday, November 17, 2008

Tom's Visit

  1. Reinstalling Vision solved the init problem
  2. Bottom improvement: hi-mag filter in both intensity window and the option dialog improves the scanning a lot!
  3. Auto-scan stops within a specified scanning depth when the obtained-pixel count reaches a given ratio of the FOV.
  4. Auto-scan generates very noisy data.
  5. 5X * 2 does not equal 10X; the 2X factor is a magnification of the FOV in 2D; it does not improve the optical capability at all.
  6. Hardware: loosening 4 screws + tuning 2 handles against the PSI mirror to make sure the center fringe does not move when tilting or focusing
  7. Hardware: tuning another two handles at the top against the stitching result aligns the stitching.
  8. Fiducials can be used to calibrate drift
  9. Post-processing with just tilt-only
  10. Getting bottom by tilting is not recommended, because the recovered data could be from the sidewall as opposed to real bottom

Vision installation

  1. download 4 files
  2. install from install.exe in 775-296_(Vision_v3.60).zip, all default.
  3. then 775-297
  4. then 775-298
  5. then v3.6cu5
  6. copy config, kikaida.cfg, wyko.ini into the config/ folder; also devices/services.cfg

Sunday, November 16, 2008

detect attached-CC: bottom-based improved

  1. bottoms are pre-tagged with CC containers
  2. for each CC, get the bottom segs and chain them up
  3. NOTE: sum(span_a_btm) > span_cc does not always work, when the bottom is seriously broken.

Saturday, November 15, 2008

CC containers

theory
  1. a CC can have only one container CC.

Thursday, November 13, 2008

detach: when to calc bottom CC-ownership?

bottom ownership can be calced by line-equation
  1. in mi_btm, select the point at median_a of a bottom, get line-equation of it
  2. in mi_cc, get all points on the line, find the radially closest point to the median_a btm point, the "cc" is the owner.

Final: right way of folder structure

Problem
  1. for each subprocess, the cpd_load contains different content, and the cellarray size is different for each process. 
  2. loading process is always non-linear and requires specific loading folders.
Solution
  1. if all folders stay ordered in the cpd_load, then when trying to load stuff, we can trace back to the previous stage; in each folder, there are only updated FOVs.

retach/detach: folder version control or not?

Problem
  1. Counter-intuitive settings in folder structure.
  2. Not fully tested, so don't know what quirks it may give.
Argument
  1. Why introduce folder version control?
  2. Cons: files are scattered into different folders, making it hard for successive processes to load them.
Solution
  1. For each FOV, if no .dat exists in attach/detach folder, first copy from filter_blob, then start from attach with overwriting.

Tuesday, November 11, 2008

Workflow reorganization in favor of sting removal

Problem
  1. Sting removal involves more pre-processing than expected,
  2. it needs to be broken down into more parent processes
Solution: new workflow
  1. edge detection
  2. clear edge angular redundancy
  3. bottom fix (an optional set of bottom data, in addition to the original raw edge):
  4. sting removal
Detail
  1. what's the diff b/w lateral (r-wise) and vertical (z-wise) data?
  2. from the fig., lateral top edges imply the linearity of the bottom seg in 2D.
  3. however, since top edges are flat, there is no indication of the undulation of the bottom from viewing the top.
  4. if only we could trust the width and use the w/z ratio to fix it...., but for sting removal, we can't trust it.
  5. we won't trust the sting segs, but we can trust most segs along the groove.
  6. from angular redundancy test, we know roughly where the stings are.
  7. to think out of the box, if stings are gone, we can fix the z data by using w/z;
  8. before this can happen, we can't treat w/z as a condition, only as a question.
  9. so the bottom fix should take two steps,
  10. before sting removal, simply do lateral fix, i.e., much like groove retaching, retach the bottoms in the sense of CC angular link.
  11. after sting removal, and the main-thread construction, we fix the depth by further w/z-based method.

sting removal: main steps

  1. condition: no angular redundancy
  2. connect bottom segs: 1) if short gap, linear linkage; 2) if long gap, seg the gap and do multi-seg linkage.
  3. check w/z
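Step 2 might be sketched like this, using plain linear interpolation as a placeholder for the multi-seg linkage (the function name, point format, and `n_sub` count are all assumptions, not the notebook's method).

```python
def link_bottom_segs(seg_end, seg_start, gap_thresh, n_sub=3):
    """Connect two bottom segs across a gap: a short gap gets a single
    linear linkage (no interior points needed); a long gap is split into
    n_sub sub-segs and linked piecewise.  Points are (azimuth, z) pairs."""
    (a0, z0), (a1, z1) = seg_end, seg_start
    gap = a1 - a0
    n = 1 if gap <= gap_thresh else n_sub
    # interior linkage points, linearly interpolated as a placeholder
    return [(a0 + gap * k / n, z0 + (z1 - z0) * k / n) for k in range(1, n)]

print(link_bottom_segs((0.0, 1.0), (0.1, 1.2), gap_thresh=0.2))  # short gap
print(link_bottom_segs((0.0, 1.0), (0.9, 2.0), gap_thresh=0.2))  # long gap
```

The w/z check of step 3 would then run over the linked result to reject implausible linkages.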

Monday, November 10, 2008

Final: angular redundancy

  1. Has to be done.
  2. cannot be perfectly done.
  3. hard sampling is still by far the best method, as opposed to radius-line-distance based free clustering.
  4. usually there shouldn't be any missed points, if we select the entire region bounded by the rough diff(va_edge) detection.
  5. some missed points have to be fixed during the next step: sting removal based on w/z.

can't do the one-shot sting removal

  1. One shot requires correct op/ed angles, i.e., both on the edge thread not the sting thread, which is not guaranteed to be correct.
  2. worse, if on same edge, there are 2+ clusters, it's hard to determine the op/ed of each cluster and fix them one by one.

New idea: sting removal, one shot!

After finding the overly dense angular seg,
if we can determine that it is a sting, we flatten it by using the hole connection method directly.

what ruined hard angular-sampling?

In the effort of clearing angular redundancy, hard sampling tends to mis-cluster, i.e., some points on the stings are missed and become self-contained clusters, so they would stay alive after the sting removal.

The problem is that the sting-thread points and the non-sting-thread points (real edge) have different angular spacing, so an overly refined hard sampling can cut into a theoretical angular group and split the edge and the sting, yet we want them to stay together so that we can measure the sting against the edge in the same group. If the sting is missed, then there will be nothing to remove.

angular redundancy: point-to-line distance dilemma

Problem
  1. If the idea of "common radius" is defined by mutual distance among multi-multi evaluation, the overlapping nature of the rectangular pixel tessellation will make the common-radius group grow excessively thick.
  2. meaning point-by-point + line equation based grouping is likely to cluster the whole set of points
Solution
  1. grouping should not work on the multi-multi assumption.
  2. at each azimuth, there should be a reference point, against which all the other points' p2l-distances are calculated.
  3. an angular group is defined by the innermost/outermost point on that common radius
  4. dilemma: hard-coded angular quantization tends to mis-cluster; soft-quantization by multi-multi free clustering clusters everything
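A minimal sketch of the reference-point grouping (Solution items 1-3), reduced to bare radii at a single azimuth; names are hypothetical.

```python
def group_by_reference(radii, ref, tol):
    """At one azimuth, measure every point's distance against a single
    reference point on the radius line rather than against all other
    points (avoiding the multi-multi assumption).  Returns the common-
    radius group and the leftover points."""
    group = [r for r in radii if abs(r - ref) <= tol]
    rest = [r for r in radii if abs(r - ref) > tol]
    return group, rest

pts = [9.8, 10.0, 10.1, 12.5, 12.6]
group, rest = group_by_reference(pts, ref=10.0, tol=0.3)
# the angular group is then represented by its innermost/outermost point
print(group, min(group), max(group))
```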

Sunday, November 9, 2008

angular redundancy problem

Problem
  • Angular grouping methods, incl. diff histogram and angular frames, are not robust.
The proper solution
  • draw explicit radius line and take all points that fall on the line

various experience

  1. previous filtering didn't always apply the z-update, which may create Inf/NaN problems even when the 2D image looks alright.
  2. naming for assembler index containers should make the index-level explicit! always!
  3. constantly refactor repetitive small code blocks with function extraction

Saturday, November 8, 2008

Plan on sting removal

  1. use w/z ratio filter
  2. a process to verify multi-inner/bottom/outer units at the same azimuth. use the extreme points
  3. at depth-less portions, check width vs. mean_width, e.g., whether w<0.5*mean_w or w>2*mean_w.
  4. at offensive azimuth, r_new = r_mean, rc_new = rect_from_polar([r_new, a_old]), depth_new = depth_nearby, collect the offensive rect coords
  5. fill the new holes caused by revising edges.
  6. get new data into storage.
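The width check in step 3 could be sketched as follows (function name and thresholds-as-parameters are assumptions; the 0.5/2.0 defaults come from the example in the step itself).

```python
def flag_sting_azimuths(widths, mean_w, lo=0.5, hi=2.0):
    """Flag azimuth indices whose groove width is implausible versus the
    mean width (w < 0.5*mean_w or w > 2*mean_w); those azimuths then get
    r_new = r_mean etc. in step 4."""
    return [i for i, w in enumerate(widths) if w < lo * mean_w or w > hi * mean_w]

print(flag_sting_azimuths([4.0, 4.2, 1.5, 4.1, 9.0], mean_w=4.0))  # [2, 4]
```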

Friday, November 7, 2008

[sting removal] Proof of the advantage of using w/z as opposed to w

std(w/z) = 0.0057 ~ 0.0124
std(w) = 1.6771 ~ 2.5518


Monday, November 3, 2008

edge detection: benchmark

sideband vs. old

16 sec vs. 30 sec.

edge detection: noise

before edge detection, there is small noise along the edge that is not removed by the intra-CC hole-removal process.

However, this noise is connected to the holes, not stand-alone objects,
so better to leave it alone for now.

pitfall of sideband method

this method assumes that the edge is perpendicular to the radius.
if the edge is slanted across the cross-section, then the sidebands miss the middle area.

solution
  1. after creating the sidebands, check the rect-bbox of the middle-band to see if it contains non-zero points (valley); if so, extend the sidebands to a fullband.

advantage of CC analysis

Tag/index all objects in the FOV,
in later edge detection, this helps to use non-pixel edge detection method, more accurate than the canonical method.

Sunday, November 2, 2008

why ho-based and CC-based methods perform differently?

  • CC-based = Intra-groove: Because in CC, the outer/inner edge undulate in a parallel way
  • Ho-based = Inter-groove: Between CCs, undulation is arbitrary, making the groove spacing more variable, so it is hard to make the division by radius.

hole-based raw edge extraction

  1. calc ho / CC centroids
  2. sort hos and CCs radially
  3. for each ho, find its sandwich-wrap CCs; one of them could be missing
  4. fast windowing-based algo:
    1. angularly windowing a hole, based on a time param for the window size
    2. in each window, extract two side-bands and only do 8-neighbor edge search to the side-bands
    3. in low-level search, by checking existence of CC nb-pixel and the CC index, do inner/outer division
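Step 3 (finding a hole's sandwich-wrap CCs) reduced to a radial nearest-neighbor sketch, with hypothetical names and CC centroids collapsed to single radii.

```python
def sandwich_ccs(hole_r, cc_radii):
    """For a hole at radius hole_r, find the radially adjacent CCs: the
    nearest CC inward and the nearest outward.  Either can be missing
    (returned as None).  cc_radii maps cc index -> centroid radius."""
    inner = outer = None
    for cc, r in cc_radii.items():
        if r < hole_r and (inner is None or r > cc_radii[inner]):
            inner = cc
        if r > hole_r and (outer is None or r < cc_radii[outer]):
            outer = cc
    return inner, outer

print(sandwich_ccs(10.0, {1: 8.0, 2: 9.5, 3: 11.0, 4: 13.0}))  # (2, 3)
```

This relies on the radial sort from step 2; the inner/outer division in the low-level search then follows from which of the pair a neighbor pixel belongs to.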

meditation: edge division

Problem
  1. hole-based edge detection has the advantage of finding the true depth-present edge(top edges)
  2. but the shapes of the holes are more varied, in that there are inward stings that cause problems in the inner/outer edge division, e.g., no easy reference radius to look at even in a very narrow azimuth window.
  3. a wrong division (incorporating extraneous blobs) will cause a sting if we sample that azimuth and take only the outermost/innermost point.
Dilemma
  1. we want to interpolate based on a continuous edge array
  2. but the array contains wrong data (stings)
  3. to remove the wrong data, we need to do angular-sampling to evaluate the edge
  4. this sampling would be a downsampling and could cause blur.
Question
  1. do we want to downsample first to get the resampled edge, or afterwards?
Answer
  1. whichever helps us preserve the resolution....
  2. then it should be after the raw edge extracted.
  3. so during sting removal, for normal edge data, we keep it all (more than the sampled data), but exclude detected stings from the sampled data only.
Raw edge detection
  1. CC-based or hole-based?
  2. CC-based: simple and straightforward to implement but not physically reasonable, no real depth data at the lateral position, need interpolation.
  3. hole-based: physically reasonable, but hard to implement when it comes to inner/outer division, because more susceptible to CC-stings.
Solution
  1. CC-based: after raw edge detection, get depth from neighbor hole-points (radius extrapolation), need interpolation; ring shaped part needs to look at wider adjacency.
  2. Hole-based: after raw edge detection, do a CC-HO band division; for each hole edge point, look at its 8-neighbor and find the dominant CC index and assign it to the CC as either inner/outer edge depending on the band-division.(NO INTERPOLATION INVOLVED)

Saturday, November 1, 2008

inner/outer edge division (2)

Problem
  1. random division point is not robust, outer/inner distance is too variable
Solution
  1. what's robust is the fact that, the division points on outer edge and inner edge are farther apart than consecutive points within outer/inner group
Implementation
  1. sort the angular seg radially and take diff(vr),
  2. find the max diff(vr) and divide.
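The implementation above is simple enough to sketch directly (function name hypothetical):

```python
def divide_inner_outer(radii):
    """Sort the angular seg radially, take diff(vr), and split at the
    largest radial gap: points below the gap form the inner edge,
    points above it the outer edge."""
    vr = sorted(radii)
    diffs = [b - a for a, b in zip(vr, vr[1:])]
    cut = diffs.index(max(diffs)) + 1
    return vr[:cut], vr[cut:]

print(divide_inner_outer([10.1, 10.0, 13.2, 9.9, 13.0]))
```

This encodes exactly the robustness claim in the Solution: the inter-group gap is assumed larger than any intra-group spacing.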

Inner/outer edge division

Problem
  1. old method: look at an edge point-of-interest; find its angular peers; see if, among the peers, inner-more points exceed outer-more points compared to this POI; if true, this is an outer point
  2. this one is too rough to handle situations where there is significant angular overlap among such angular grooves; then the number stats get messed up instead of giving a clear-cut 1-on-1 inner/outer pair.
Solution
  1. angular analysis: still take angular groups; at each azimuth, find in the original top area a random non-edge point, all points outer-more are outer edge, and inner-more are inner edge.