Free online color matching tool for your maps

Maps convey relevant information to a target audience. But apart from being informative, maps should also be artistic. Cartography is art; you cannot just throw your violets in with the greens, or your blues in with anything at all. The right combination of RGB or CMYK swatches will make a map more pleasing and more eye-catching. And if you have 12 classes in a thematic map, you wouldn't want the colors randomly picked, as if you were making lantern decorations for the Christmas season. Color coordination really is the name of the game.

My husband, who's into creative design, is my one true critic when it comes to color combinations. He would just peek at my final map for a project or laboratory exercise and tell me to use this color, or a different color theme, instead of my initial picks. He was such a relief when I was finishing my maps and he was at home. But when he's not around, the hard part begins.

Anyway, I stumbled upon this book last semester and found it really useful, with great tips and illustrations for beginner- to intermediate-level cartographers:
GIS Cartography: A Guide to Effective Map Design

There are also free online color matching tools that will surely be of big help to us.

ColorBrewer, which was made specifically for cartographic purposes, is a great tool for picking just the right color sets for our thematic maps. The color schemes under the color-blind option are really great. It also considers the type of data present in your map and the map context. The best feature is an embedded scorecard which indicates whether your output is colorblind-friendly, color-printing-friendly, photocopy-friendly and/or LCD-friendly.

COLOR BREWER : http://colorbrewer2.org/
Another color matching tool is Color Blender. It's not actually focused on cartography but is more of a color scheme guide for just about anything, such as logo, website or print design. It already has a gallery of blends from which you can choose. The option for downloading or sending a blend, along with its active color function, works just as well.

COLOR BLENDER: http://www.colorblender.com/

I hope to enjoy mixing and matching from such an array of color schemes in my upcoming maps.


Free PDF to WORD options

The latest versions of Microsoft Office and OpenOffice are already capable of saving your documents from doc, docx or odt to pdf format. The reverse is quite unusual.

There are trial versions of PDF-to-Word converters, with their limitations of course. PDF2Word can only be used 100 times and converts only the first 5 pages to .doc format. Fonts changed and the layout was a bit different after conversion.

The most useful option for me is the OpenOffice PDF Import Extension. If you have OpenOffice installed on your PC, just double-click the downloaded extension and an installation dialog will guide you through.

The pdf opens in OpenOffice Draw, which lets you do basic editing of the text and images in the document.

You can save the edited file as another pdf or export as an image file or in html format. 

It's free and it's open source, so let's hope to see more developments of this extension for OpenOffice. It would be nicer if pdf files could be opened in OpenOffice.org Writer and saved as odt.

Free Class Schedule Notification on your Mobile Phone using Google Calendar

At long last, after one long week, I am finally enrolled for the 2nd sem of AY 2010-2011. The entire enrollment experience was really tedious and cumbersome because of the usual queues, considering the enrollment is already online and automated.

Anyway, I am now officially enrolled, so I just have to tidy things up in my calendar. I set up my Google Calendar to send notifications of my scheduled classes for the night to my mobile phone. The notifications are free, so why not? It pays to be organized, at least for a while.

Here's my schedule for this sem. And some other events/notifications on my Google Calendar.

I created a test event for today and scheduled the notification to arrive 10 minutes before the event. Voila! I received a text message from Google saying:

Reminder:test @ Thu Nov 11 12:20pm-1:50pm in PIVS (melanie's google calendar).

Memorizing class schedules and room assignments was quite a mess during my undergrad years. I used to print the schedule and paste it in my notebook back then. But it's better to receive an electronic notification, in this case a text message from my friend Google.

Well here's how I did it.

1. Set up your mobile number in Google Calendar. It's under Google Calendar > Settings > Calendar Settings > Mobile Setup.
2. Save the settings, then validate them later using the code sent to your mobile phone.
3. Create a new calendar for your subjects this sem.

4. In your newly created calendar, encode your subjects, the building and room number, and of course the time, using the Create event option in the calendar.

5. Specify whether the class meets once or twice a week by checking the repeat box below the event title text box.

6. Set up the SMS notification on the subject/event. During my undergrad years, when our schedules were overloaded, we were allowed to leave class 15 minutes before the hour; that was the time allowance for walking or taking an IKOT/TOKI ride to the next class. So I guess a notification lead time of maybe 20 minutes will do.

7. Save the settings and you're done.
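Under the hood, a recurring class event with a reminder is essentially an iCalendar (RFC 5545) entry. A hypothetical version of my Thursday PIVS class with a 20-minute alarm would look roughly like this (the summary, times and recurrence are illustrative, not exactly what Google stores):

```
BEGIN:VEVENT
SUMMARY:Class in PIVS
LOCATION:PIVS
DTSTART;TZID=Asia/Manila:20101111T122000
DTEND;TZID=Asia/Manila:20101111T135000
RRULE:FREQ=WEEKLY;BYDAY=TH
BEGIN:VALARM
ACTION:DISPLAY
TRIGGER:-PT20M
DESCRIPTION:Class reminder
END:VALARM
END:VEVENT
```

The TRIGGER of -PT20M is what corresponds to the 20-minute lead time in step 6.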

Hoping for an exciting, challenging and productive semester ahead of us!


Image Registration and Georeferencing in ENVI (Cavite, Philippines)

Georeferencing and Registration Methodology
The following steps were undertaken for the rectification or georeferencing of the image:

1. I created a subset covering the province of Cavite from the Landsat ETM+ image that I downloaded from GLCF. A topographic map of the equivalent subset was obtained, with the following basemap information.

Fig. 1: Basemap information

Scale: 1:50,000
Sheet name: Cavite, Philippines
Sheet number: 3163 II

2. A scanned 1:50,000 topographic map of Cavite appears below.
Fig. 2a: 1:50000 Topographic map of Cavite
3. An image-to-map registration was performed using the coordinates on the boundary of the map as ground control points. A total of 10 points were obtained, with a total RMSE of 0.396853.

4. An image-to-image registration was later performed, yielding 9 GCPs with a total RMS error of 0.970929. The Landsat ETM+ image I obtained had already undergone Level 1G correction, hence only slight warping between the originally downloaded image and the warped output was observed.
Fig. 3a: Distribution of ground control points within the Landsat ETM+ image of Cavite

Fig. 3b: Georeferenced image of Cavite, Philippines

Some Discussions

On topographic maps vs. satellite images
Topographic maps are already orthorectified, hence they can be used to measure distances between two points. Names of features are also available on a topographic map. Depending on the level of correction applied to a satellite image, it may or may not be georeferenced or orthorectified. Satellite images show the terrain of ground features. Moreover, satellite images are captured at different wavelength bands of the electromagnetic spectrum, hence analysis and interpretation can be performed on more than three dimensions. Depending on the field of application or query, one can switch among different color composite displays of the bands contained in the satellite image to facilitate interpretation. Scanned topographic maps, on the other hand, utilize only the visual RGB bands. Topographic maps are available at different scales depending on the area needed for interpretation. Satellite images, depending on the capabilities of the sensor, are available in different radiometric, spectral and spatial resolutions. One satellite image covers a very large area, unlike topographic maps, which cover relatively smaller areas and hence show finer details and information about the features in them.

On the use of topographic maps for geometric rectification
The use of topographic maps poses some potential problems for geometric rectification. Depending on the date of the topographic survey used for the map, the information contained in it, whether man-made or natural features, may have already been altered through time. This is crucial for the identification and location of ground control points. The medium used for printing the topographic map is also a factor to consider. Different materials expand and contract as the temperature of their environment changes, and this expansion or contraction causes significant distortions of the features printed on the map. Paper, when folded or crumpled, also causes geometric distortion of the features on the map. When the map is scanned, scanner resolution and aberrations caused by the scanner glass may introduce further geometric distortion into the basemap. Lastly, when ground control points (GCPs) are chosen, the interpolation of their corresponding coordinates can be subjective, depending on the discretion of the observer, which may cause significant differences between the interpolated and the true ground coordinates of the GCPs.

On coordinate transformation
The RMS error measures the discrepancy between the destination control points and the transformed locations of the source control points. A root mean square error is calculated for each transformation performed and indicates how good the derived transformation is. The transformation is derived using least squares, so more GCPs can be given than are strictly necessary; a minimum of three GCPs is required to produce a transformation for which an RMS error can be computed. The formula for the calculation of RMSE appears below.
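Assuming the usual definition, with $(\hat{x}_i, \hat{y}_i)$ the source GCPs mapped through the fitted transformation and $(x_i, y_i)$ the corresponding reference points, the total RMSE over $n$ GCPs is:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\left(\hat{x}_i - x_i\right)^2 + \left(\hat{y}_i - y_i\right)^2\right]}
```

Each individual GCP's RMS error is the term $\sqrt{(\hat{x}_i - x_i)^2 + (\hat{y}_i - y_i)^2}$ inside the sum.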

To achieve an acceptable degree of spatial accuracy, the total RMSE should not be greater than 1.

Depending on the method of transformation, the minimum number of control points needed may vary. A 2D conformal coordinate transformation, which preserves the orthogonality of the coordinate axes, models scale change, rotation and translation with four parameters, so a minimum of 2 control points is needed. A 2D affine transformation, on the other hand, which preserves the parallelism of parallel lines, has 6 parameters, hence a minimum of three control points is needed to produce 6 equations in 6 unknowns.

Computation of the parameters for each transformation method

Using the following formulas for conformal and affine transformation, a Java program was written to compute the transformation parameters.
    Fig 4a: Working equations for the formulation of matrices for 2D Conformal and Affine Transformations
The input file contains the coordinates from the GCP text file of the image-to-image registration in ENVI; the output file contains the computed transformation parameters.

The following table lists the input coordinates used in solving for the transformation parameters.

Fig. 4b: Base and warp image coordinates as ground control points

Base image (x, y)        Warp image (x, y)
2197.00    467.00        4288.00    3435.75
2101.00    915.25        4255.75    3581.50
1406.25   1293.00        4027.00    3709.00
 757.50   1964.50        3806.25    3933.50
 377.00    256.50        3705.16    3376.46
2831.50   1920.00        4491.25    3914.00
2869.25    638.00        4504.50    3487.25
1804.00   1168.25        4156.94    3667.62
1797.75    797.75        4157.56    3545.68

The following tables contain the resulting parameters for the 2D affine and 2D conformal coordinate transformations, respectively.

Fig. 4c : Parameters for 2D Affine transformation
Parameter Value
a0 3579.92167
a1 0.32503
a2 -0.00779
b0 3291.26455
b1 -0.00444
b2 0.32919

Fig. 4d : Parameters for 2D Conformal transformation
Parameter Value
a1 0.32605
a2 -0.00021
a3 3569.72659
a4 3286.96612
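The affine parameters above can be reproduced by ordinary least squares on the nine GCP pairs from Fig. 4b. The original computation was done in a separate Java program; the following is an independent pure-Python sketch (the helper names `solve3` and `fit_affine` are mine, not from that program):

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 linear system
    M = [A[i][:] + [b[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_affine(base, warp):
    # Least-squares fit of x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y
    # via the normal equations (design rows are [1, x, y])
    rows = [(1.0, x, y) for x, y in base]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    a = solve3(AtA, [sum(rows[k][i] * warp[k][0] for k in range(len(warp))) for i in range(3)])
    b = solve3(AtA, [sum(rows[k][i] * warp[k][1] for k in range(len(warp))) for i in range(3)])
    return a, b

# GCPs from Fig. 4b
base = [(2197, 467), (2101, 915.25), (1406.25, 1293), (757.5, 1964.5),
        (377, 256.5), (2831.5, 1920), (2869.25, 638), (1804, 1168.25),
        (1797.75, 797.75)]
warp = [(4288, 3435.75), (4255.75, 3581.5), (4027, 3709), (3806.25, 3933.5),
        (3705.16, 3376.46), (4491.25, 3914), (4504.5, 3487.25),
        (4156.94, 3667.62), (4157.56, 3545.68)]
(a0, a1, a2), (b0, b1, b2) = fit_affine(base, warp)
print(a0, a1, a2, b0, b1, b2)
```

The printed parameters should land close to the values in Fig. 4c.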

On different transformation and resampling methods in ENVI
Upon application of the three different transformation methods, the resulting images are warped differently. The figure below compares the amount of warp present in the image after each transformation using different resampling methods.

    Fig. 5: Amounts of warping in the Cavite satellite image using RST, 2nd and 3rd order polynomial transformation
    After geometric corrections and translations, resampling is performed to produce a better estimate of the DN values for individual pixels. In the nearest neighbor algorithm, the transformed pixel takes the value of the closest pixel in the pre-shifted array. In bilinear interpolation, a distance-weighted average of the DN values of the 4 surrounding pixels is used, while cubic convolution interpolates from the 16 closest input pixels.
    Images resampled using cubic convolution come out the sharpest.
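To make the resampling idea concrete, here is a minimal pure-Python bilinear interpolation. This is only a sketch of the distance-weighted averaging described above, not ENVI's actual implementation:

```python
def bilinear(img, x, y):
    # img is a 2-D list of DN values; (x, y) is a fractional (column, row) position
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    # Interpolate along x on the top and bottom pixel rows, then along y
    top = img[y0][x0] * (1 - dx) + img[y0][x0 + 1] * dx
    bot = img[y0 + 1][x0] * (1 - dx) + img[y0 + 1][x0 + 1] * dx
    return top * (1 - dy) + bot * dy

dn = [[0, 10], [20, 30]]
print(bilinear(dn, 0.5, 0.5))  # → 15.0, the distance-weighted average of all 4 pixels
```

At integer positions the function simply returns the original DN value, which is why bilinear resampling leaves already-aligned pixels untouched.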


Hold-out validation (HOV) uses another set of GCPs for the same image dataset to verify the spatial accuracy of the georeferenced image.

    Fig. 6a: Hold-out Validation using a new set of ground control points of the Cavite Landsat ETM+ Image
Shown above are the relative location and distribution of the new set of ground control points. The nine (9) GCPs yield a total RMS error of 0.448321.

Leave-one-out cross-validation (LOOCV) uses all of the data to estimate the trend and autocorrelation models. It removes each data location, one at a time, and predicts the associated data value.

Fig.6b : Sample Leave-one-out Cross Validation
The figure above shows how LOOCV works. After choosing the GCPs and bringing the total RMSE below 1, the image-to-image GCP list was sorted so that the point with the largest RMS error appears at the top of the list. This point was turned off, which lowered the total RMS error of the GCPs from 0.970929 to 0.818812. If we hit the Predict button in the Ground Control Points Selection dialog, the crosshair in the zoom window will center on the point that would give the lowest total RMS error based on the correlation of points in the image.
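The LOOCV mechanics are easy to sketch in a few lines. Here a toy 1-D least-squares line fit stands in for the image transformation; all names are hypothetical:

```python
def loocv_rmse(points, fit, predict):
    # Leave-one-out: refit the model without each point, then predict that point
    errs = []
    for i in range(len(points)):
        train = points[:i] + points[i + 1:]
        model = fit(train)
        x, y = points[i]
        errs.append((predict(model, x) - y) ** 2)
    return (sum(errs) / len(errs)) ** 0.5

def fit_line(pts):
    # Ordinary least squares for y = a + b*x
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

pts = [(0, 1), (1, 3), (2, 5), (3, 7)]  # exactly y = 1 + 2x
print(loocv_rmse(pts, fit_line, lambda m, x: m[0] + m[1] * x))  # → 0.0
```

With perfectly consistent data the held-out predictions are exact; with noisy GCPs the LOOCV RMSE gives a more honest accuracy estimate than the fitting RMSE.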

On Level 1G correction of Landsat images 
Upon rectification, distortions caused by platform and surface geometric characteristics can't be easily distinguished, since the image obtained had already undergone Level 1G correction, a format created by NASA to indicate imagery that is basically ready to use. "L1G" stands for "Level 1G", meaning the data has been processed to Level 1 and is radiometrically and geometrically corrected.

On Image-to-image vs. Image-to-map registration
An image-to-image registration is a lot easier than an image-to-map registration. You just have to scan the topographic map and georeference it using the graticule values on the borders of the map. Bias from map-scale interpolation is removed because once you georeference the image, the coordinates of the pixel at the desired GCP within the georeferenced image are readily available. The drawback, however, lies in the manner of scanning the topographic map. The resolution of the scanner that produces the output topo map, as well as the state of the topo map upon scanning, greatly affects the quality of data one can extract from the scanned map. Folds, crumples or obliterations on the map produce significant distortions in the coordinates derived from the map.


    How to change Google Chrome language from Tagalog to English

    Yay! It was my first time installing Google Chrome on my PC. I had a hard time figuring out how to change the default language setting from Filipino to English (US). The translation was a bit awkward, though grammatically correct. But you see, some English terms (especially technical ones) just don't have a "decent" Filipino counterpart.

    The tab "Sa ilalim ng hood" was quite obviously "Under the hood", but it just gave me a serious laugh. That phrase doesn't make sense at all in a standalone conversation. hahaha...

    Anyway, I documented how I did it. Just for fun. It was still an achievement of the day for me. (^_^)

    Principal Components Analysis

    Principal Component Analysis (PCA) aims to eliminate the interband correlation and reduce the effective dimensionality of the data or image.
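As a toy illustration of what PCA does (this is not ENVI's routine), here is a pure-Python sketch for just two bands: it builds the 2x2 covariance matrix, takes its eigen-decomposition in closed form, and projects the pixels onto the principal directions:

```python
import math

def pca_2band(band1, band2):
    # Treat the two flattened bands as paired samples; compute the 2x2 covariance matrix
    n = len(band1)
    m1, m2 = sum(band1) / n, sum(band2) / n
    var1 = sum((v - m1) ** 2 for v in band1) / n
    var2 = sum((v - m2) ** 2 for v in band2) / n
    cov = sum((band1[i] - m1) * (band2[i] - m2) for i in range(n)) / n
    # Closed-form eigenvalues of the symmetric matrix [[var1, cov], [cov, var2]]
    mid = (var1 + var2) / 2
    half = math.sqrt(((var1 - var2) / 2) ** 2 + cov ** 2)
    lam1, lam2 = mid + half, mid - half  # lam1 >= lam2
    # Unit eigenvector for lam1; the PC2 direction is orthogonal to it
    if abs(cov) > 1e-12:
        vx, vy = cov, lam1 - var1
        norm = math.hypot(vx, vy)
        vx, vy = vx / norm, vy / norm
    else:
        vx, vy = (1.0, 0.0) if var1 >= var2 else (0.0, 1.0)
    pc1 = [(band1[i] - m1) * vx + (band2[i] - m2) * vy for i in range(n)]
    pc2 = [-(band1[i] - m1) * vy + (band2[i] - m2) * vx for i in range(n)]
    return (lam1, lam2), pc1, pc2

# Two perfectly correlated "bands": all the variance collapses into PC1
eigs, pc1, pc2 = pca_2band([1, 2, 3, 4], [2, 4, 6, 8])
print(eigs)
```

For this toy input the second eigenvalue is essentially zero, which is exactly the "eliminate interband correlation" effect: PC1 carries all the shared signal and PC2 is left with nothing.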

    The false color composite of the original Laguna.img file appears below.
    Fig. 1: Standard color infrared composite image
    The satellite image from Google Earth of Los Banos, Laguna was also included for identification of features and their names. Fig. 1a shows the saved image.

    Fig.1a: Google Earth image of Los Banos, Laguna
    After subjecting the original image to principal components analysis, a color composite using PC1, PC2 and PC3 as R, G and B respectively was produced. The color composite image is shown in the figure that follows.

    Fig. 2: RGB = PC1,PC2,PC3 of Laguna.img
    The six principal component grayscale images appear below.

    Fig. 3: Output principal component images

    The tonal patterns of principal component image 1 differ from those of ETM+ band 1 of Laguna.img. Figure 4 shows a generalized view of these differences. The waters appear as dark pixels in PC1 and bright pixels in ETM+ band 1. Higher contrast between land and water features is evident in PC1, and urban or built-up areas appear brighter in PC1. The roads, however, are harder to delineate in PC1 than in band 1, where road networks appear as bright pixels in contrast with the darker pixels of vegetation and built-up areas. Discrimination between different types of vegetation can be performed more easily in PC1.

    Fig. 4: Principal component image 1 and ETM+ band 1 grayscale images of Laguna.img

    The tonal patterns of principal component image 2 differ from those of ETM+ band 3 of Laguna.img. Vegetation appears as bright pixels in the PC2 image, while the band 3 image shows otherwise. Water and urban areas or soil are less distinguishable in PC2 than in the band 3 image. Coastal features, however, are more recognizable in PC2 because of the increased contrast for water and the features in it. The dendritic pattern of the slopes of Mt. Makiling is more obvious in the band 3 image.

      Fig. 5: Principal component image 2 and ETM+ band 3 grayscale images of Laguna.img
    The primary purpose of principal components analysis is to reduce the dimensionality of the image by eliminating interband correlation while still maintaining the features in the image. The resulting principal component images can be better used for classification than the standard color infrared composite because the variance of the data has been concentrated in the first few output principal component images. The fifth and sixth principal component images, however, show less information and vague features because of increased noise.
    The variance differs across the components of the PCA result. A screen capture of the generated PCA statistics file appears below. The variance of each PCA image was computed by dividing the eigenvalue of that component by the sum of the eigenvalues of all six components. Figure 6 shows the derived percentage variance of each principal component image based on the eigenvalues in Fig. 6a. From Fig. 6 it can be seen that the maximum variance, more than 60% of the total, is contained in PC1. This means that the output principal component images are highly uncorrelated. Conversely, PC6, with a computed variance of only 0.11%, has the highest correlation among the images.
    Fig.6: Percentage of variance of each principal component image based on the eigenvalues obtained from Fig. 6a
    PC   Eigenvalue   Variance
    1     1655.80      0.6027
    2      952.98      0.3469
    3      108.61      0.0395
    4       16.75      0.0061
    5       10.05      0.0037
    6        2.99      0.0011
    Fig. 6a: Generated statistics file for the principal components analysis
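The variance column in Fig. 6a can be reproduced directly from the eigenvalues, i.e. each eigenvalue divided by the sum of all six:

```python
# Eigenvalues from the generated PCA statistics file (Fig. 6a)
eigenvalues = [1655.8, 952.98, 108.61, 16.75, 10.05, 2.99]
total = sum(eigenvalues)
shares = [ev / total for ev in eigenvalues]
for pc, share in enumerate(shares, start=1):
    print(f"PC{pc}: {share:.4f}")  # PC1 prints 0.6027 — over 60% of the total variance
```

The six shares necessarily sum to 1, since the total variance is preserved under the PCA rotation.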

    Fig.7: Generated plots for the general statistics, standard deviation and eigenvalues for the principal component analysis of Laguna.img
    There are noticeable "red" patches in the PC1, PC2, PC3 composite displayed as RGB. Figure 8 shows the composite principal components image together with the false color infrared composite image. From Fig. 8, the visible red patches in the principal components composite appear to be non-vegetated areas with high reflectance in the NIR region. Based on Fig. 8b, the red patches must be concrete materials, which show the highest NIR reflectance among the plotted spectral signatures of non-vegetated materials.
    Fig. 8: PC1, PC2 and PC3 composite (top) and false color infrared image RGB 432 (bottom)

      Fig. 8b: Spectral Signatures of Non-vegetated areas
      Source: Remote Sensing Tutorial Introduction - Part 2 Page 6
    The PC6 image has a very granular texture compared with PC1 and PC2. Fig. 3 shows how the noise or granularity of the images increases with the component number. The last two components, PC5 and PC6, primarily contain "unexplained", noisy elements. They appear as stripes running diagonally from the upper left to the lower right corner of the images in Fig. 3; these stripes represent spatially correlated, structured noise. Since PC1 and PC2 capture most of the variance, as shown in Fig. 6, the first two principal component images are free from the noise that gives the later components their granular texture.

      The factor scores and factor loadings of each output grayscale image of the principal components analysis appear in Fig. 9.
      Fig.9: Eigenvectors and degree of correlation for each component


      Factor scores (eigenvectors), with factor loadings noted at right:

      PC    Band 1   Band 2   Band 3   Band 4   Band 5   Band 7
      1     -0.05    -0.37    -0.31     0.00    -0.64    -0.60    lowest correlation: bands 4-1 (-0.647261) – stable features
      2     -0.05    -0.42    -0.44     0.19    -0.24     0.73    2nd lowest: bands 4-2 (-0.607767)
      3      0.06    -0.59    -0.28    -0.04     0.71    -0.26    3rd lowest: bands 3-4 (-0.508561)
      4      0.38     0.50    -0.73    -0.28     0.07    -0.03    4th lowest: bands 5-1 (-0.141014)
      5      0.77    -0.04     0.13     0.62    -0.04    -0.07    5th lowest: bands 5-2 (-0.115757) – noise
      6      0.51    -0.31     0.30    -0.70    -0.17     0.17    6th lowest: bands 3-5 (0.161244) – noise

      On the premise of locating features and stream patterns on the slopes of Mt. Makiling, principal components 1, 3 and 4 exhibit the most features for classification. PC2 doesn't show much detail on the slopes of Mt. Makiling, while PC5 and PC6 contain noise, so features can't easily be derived from either. The composite image RGB = PC1, PC3, PC4 serves this purpose. Fig. 10 shows the individual grayscale images of PC1, PC3 and PC4 along with their RGB composite at the bottom of the figure.

      Fig. 10: PC1, PC3 and PC4 grayscale images and their RGB composite