Thursday, November 25, 2010

CodeIgniter and PHP Howto - Embedding images in Email

DOWNLOAD THE CODEIGNITER EMAIL EXTENSION HERE

I recently had a requirement in a client project for images to be embedded in the HTML email code (<img src='cid:embeddedImage' />), rather than referenced via a public URL (i.e. <img src='http://www.mysite.com/myImage.png' />).





QUICK OVERVIEW - THE PUNCH LINE

I have written a CodeIgniter Library Extension to the basic Email class that facilitates the embedding of images in emails. You can download the code HERE.

To implement it, follow the instructions for Extending Native Libraries on the CodeIgniter website http://codeigniter.com/user_guide/general/creating_libraries.html

Finally, in the body of your email, use the following macro, making sure the class_id attribute matches the id used in the img src attribute (i.e. <img src='cid:my_image' />). An example message body would be as follows:


<html>
<body>
<img src="cid:my_image" />
</body>
</html>

// Macro in Windows
{embedded_image file=C:\\my_image.png class_id=my_image}{/embedded_image}

//Alternative Linux Macro
{embedded_image file=/var/my_image.png class_id=my_image}{/embedded_image}


And that's it - the library encodes the image file as a base64 string and embeds it in your email.
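For context, here is a hypothetical controller action showing how the extension might be used once installed. It is only a sketch - the addresses and file path are placeholders, and it assumes the extension keeps the native Email class interface (which is how CodeIgniter library extensions work):

// hypothetical usage sketch - addresses and file paths are placeholders
function send_newsletter()
{
    $this->load->library('email');
    $this->email->initialize(array('mailtype' => 'html'));

    // the macro can sit anywhere in the message body
    $body  = '<html><body><img src="cid:my_image" /></body></html>';
    $body .= '{embedded_image file=/var/my_image.png class_id=my_image}{/embedded_image}';

    $this->email->from('me@mysite.com', 'My Site');
    $this->email->to('someone@example.com');
    $this->email->subject('Newsletter');
    $this->email->message($body);
    $this->email->send();
}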

PROS AND CONS OF EMBEDDING IMAGES IN EMAIL:

PROS

- Images are displayed immediately when the message is opened, rather than the email client prompting the user to allow remote content... great for newsletters
- Emails are self-contained: once downloaded, they don't require a live connection to view in all their glory

CONS

- A little more coding and knowledge of the email message format is required to get them to work if your framework does not support embedding images in email
- There is some conjecture about spam filtering for bulk messages with embedded images. Some believe spam filters treat emails with embedded images more pessimistically when sending to more than 100 addresses

HOW TO EMBED AN IMAGE

I'm not going to go too far into the coding of native PHP to send email with embedded images (unless I get requests) - so I'll give a brief overview and some references.

Like most things on the web, email message content can be defined in a series of envelopes. These envelopes are defined by content type. You can read about all of them here http://www.freesoft.org/CIE/RFC/1521/15.htm

One of the most useful diagrams illustrating the content structure of an HTML email can be found at http://www.phpeveryday.com/articles/PHP-Email-Using-Embedded-Images-in-HTML-Email-P113.html
A word of warning though - the article itself has buggy code and examples.

Roughly speaking, an email that supports HTML with embedded images plus a plain text alternative (for non-HTML email clients) should render the following content in the email body:



Content-Type: multipart/alternative; boundary="UNIQUE_ID_1"

--UNIQUE_ID_1
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

.... plain text alternative content for your html email....

--UNIQUE_ID_1
Content-Type: multipart/related;
 boundary="UNIQUE_ID_2"

--UNIQUE_ID_2
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable

.... html message content referencing the embedded image (<img src='cid:class_id_referenced_in_img_src' />) ....

--UNIQUE_ID_2
Content-Type: image/jpeg
 name='embedded_image.jpg'
Content-Transfer-Encoding: base64
Content-ID: <class_id_referenced_in_img_src>
Content-Disposition: inline;
 filename='embedded_image.jpg'

(Base64 encoded binary for the image)

--UNIQUE_ID_2-- 

--UNIQUE_ID_1--


NOTE on the above.

A boundary prefixed with -- opens a content section; the same boundary with a trailing -- closes the multipart section. When the content type is 'multipart', you can define more than one content section under the same boundary, as you are specifying either alternative content or related content. multipart/alternative means only one of the specified content sections will be used. multipart/related means that one content section references another. We use the multipart/alternative content type to specify the HTML and plain text message alternatives, and multipart/related for relating the HTML content with the embedded image content.
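If you want to roll this by hand rather than use the extension, the embedded image part boils down to base64 encoding the file and wrapping it in the headers shown above. A minimal PHP sketch (the boundary, file path and Content-ID are placeholders):

// build the image section of the multipart/related envelope
$boundary = 'UNIQUE_ID_2';
$file     = '/var/my_image.png';   // placeholder path
$cid      = 'my_image';            // must match <img src="cid:my_image" />

$imageData = chunk_split(base64_encode(file_get_contents($file)));

$part  = "--" . $boundary . "\r\n";
$part .= "Content-Type: image/png; name=\"" . basename($file) . "\"\r\n";
$part .= "Content-Transfer-Encoding: base64\r\n";
$part .= "Content-ID: <" . $cid . ">\r\n";
$part .= "Content-Disposition: inline; filename=\"" . basename($file) . "\"\r\n\r\n";
$part .= $imageData . "\r\n";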

Monday, November 22, 2010

noscript tag and google index

The noscript tag is indexed by Google! I recently released work for a client which used the noscript tag to display the typical 'this site needs javascript' warning if javascript was disabled in the visitor's browser.... unfortunately I did this in the head of my page AND used an h3 element to title the warning segment....

As a result, most pages on the site were indexed with 'Javascript not enabled' as the title, and the site's meta-tagged description as the content. Not happy.

Best practice - if you are going to use the noscript tag for simple javascript warnings, place it at the bottom of the rendered page.... many sites do this (including StackOverflow). It might also help not to use any heading elements (i.e. h1, h2, h3), but instead rely on styling another element as a heading - for example:
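A minimal sketch of what this might look like (the wording and styling are placeholders):

<body>
    <!-- ... page content ... -->

    <!-- warning kept at the bottom of the body, styled rather than using h1/h2/h3 -->
    <noscript>
        <div style="font-weight: bold;">This site requires Javascript to be enabled.</div>
    </noscript>
</body>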

Sunday, November 07, 2010

Free DNS Servers: Free DNS Online

For the last few years of web development, I have had to find a couple of reliable free DNS online services to manage my own and my clients' domains. I thought I'd post a few up and rate my experience with them.

Free DNS Online - My Experiences


Zone Edit : A great free DNS online service, 100% private and 100% free. Accounts used to be able to serve up to 5 domains, however recently this has been restricted to 2. The User Interface / Administration feels a bit clunky and old-school web, but it is pretty straightforward and easy to use.

FreeDNS : Another good free DNS online service, however your entries can be viewed by other users. Other users can configure subdomains off your domain, although you can pipe this through an authorisation process (where you, the administrator, are emailed to confirm any config changes). Payment is required if you want to make your listing private.

DNSExit : An uncapped free DNS online service. You can have as many accounts as you require... you can even set up dynamic DNS, which is nice. Very slow though - and I have some questions regarding reliability. Recently the service was brought down by a DDoS attack for a couple of days.... have a backup secondary DNS on an alternate name server if possible.

DynDns : A free dynamic DNS online service. It has been around for years - I can't recommend it enough for publicly serving from your dev box. Great for limited serving (i.e. demoing sites during development). They have a range of other paid services - however, a bit of googling will uncover free alternatives.


Tuesday, November 02, 2010

Detect Popup Blocker: Popup Blocker Detection for Chrome + all browsers

Popup blocker detection for all major browsers including Chrome....





Chrome popup blocker detection is a little different from other browsers in that Chrome returns a valid window object after calling window.open, even when popups are disabled. There are a heap of posts around, all basically stating that to determine whether this window object has been blocked by Chrome, we need to test whether the innerHeight of the popup has been set to 0.

Friday, October 15, 2010

Google Analytics - multi domain named site tracking



Another quick blog... Recently I added analytics to a couple of sites with more than one domain name. It's easy to see how much of your traffic is from each hostname (Visitors -> Network Properties -> Hostnames), but I wanted to be able to overlay how much traffic was arriving at the site from each domain (to see which one was being advertised most effectively).....

I was able to achieve this by creating a Custom Segment.

When first logging into the Google Analytics Dashboard for the desired site / account, just above the Date Range, there is the Advanced Segments drop down list. Clicking on that drop down I was able to create a new Advanced Segment.

In the Advanced Segment editor, I could expand the Content submenu from the Dimensions menu on the left, and drag the 'HOSTNAME' field into my segment.

I then set the Condition field to 'CONTAINS', and entered the base hostname of the additional domain I wanted to analyse separately (i.e. myseconddomain.com).

After naming and saving the segment, I can apply it via the Advanced Segments drop down in the dashboard and voila....

Friday, October 08, 2010

CodeIgniter - Supporting Multiple Domains in config["base_url"]

A very quick blog.....

Within the site config file (application_folder/config/config.php), a base_url property is set. This is read by the base_url() helper to generate server side redirections......

If you are creating a site which has more than one domain name (i.e. www.domain_one.com and www.domain_two.com), it's probably a good idea to dynamically create this value in the config file. This way, the domain name is preserved when redirecting between pages.


//...config ...//
$config['base_url'] = 'http';
if (isset($_SERVER["HTTPS"]) && $_SERVER["HTTPS"] == "on") $config['base_url'] .= "s";

$config['base_url'] .= "://";

if ($_SERVER["SERVER_PORT"] != "80") $config['base_url'] .= $_SERVER["SERVER_NAME"].":".$_SERVER["SERVER_PORT"];
else $config['base_url'] .= $_SERVER["SERVER_NAME"];

$config["base_url"]."/";
//... config ...//

Wednesday, October 06, 2010

IE Cache and AJAX: Cache Busting Ajax requests

Yet another 'special case' caveat for Internet Explorer, the red headed step child of the browser family..... (sorry to any red headed step children who might be reading this - chalk it up to the savage injustices of life). I discovered that IE cache and Ajax requests are not the best of friends compared to how other browsers handle Ajax requests.



Recently I found that IE cached ajax requests in a CodeIgniter + ExtJS site. I was using url rewriting, so all GET params were encoded as URI segments...

eg. (http://host/controller/action/param1/param2)

The Problem:
Usually, I would use ExtJS's inherent cache busting tools (Ext.Ajax.disableCaching - which normally defaults to true).... but due to the url rewriting, the ExtJS method caused issues. Query string values (?blah=value) are disallowed in my app because of the url rewriting, so ExtJS's native cache disabling does not work, as it simply appends a uid to the querystring (?_dc=123453443343). This caused 'disallowed character' exceptions.

Furthermore - I couldn't simply add a random variable to the end of a request, as this could be misinterpreted as an actual parameter for actions whose parameters have default values

eg. http://host/controller/action/param1/param2/no_cache01223213312

no_cache01223213312 could be misinterpreted as param3 in the following action:
public function action($param1, $param2, $param3 = "default value")
{
    //..//
}


The Solution:
The Big Stick:
Whether or not you use an MVC framework or URL rewriting, the first thing you should consider is that on all Ajax actions, the 'Pragma' header should be set to no-cache..... so in PHP - write the header somewhere before content is returned to the browser

header("Pragma: no-cache");

This can really suck, as it blows away all your lovely server side cache and introduces a potential performance bottleneck to your app, all because Dwayne Dibley is still browsing your site using IE.
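One way to soften the blow - a minimal sketch, assuming your Ajax library sends the conventional X-Requested-With: XMLHttpRequest header (ExtJS and jQuery both do by default) - is to only disable caching for Ajax responses:

// only bypass caching when the request came in via Ajax
if (isset($_SERVER['HTTP_X_REQUESTED_WITH'])
    && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest')
{
    header("Pragma: no-cache");
    header("Cache-Control: no-cache, must-revalidate");
}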

The ExtJS (Javascript) way:
The ExtJS solution was to intercept all AJAX requests at the page level and add a random POSTed variable to the parameter listing.


Ext.Ajax.disableCaching = false;
Ext.Ajax.addListener("beforerequest", function (conn, options ){
   if(!options.params) options.params = {};
   options.params.cacheBuster = Ext.id();
}, this);



This forces a server side request, as each request is unique (thanks to the random POST variable). It also allows me to freely specify GET params in the rewritten url, as I am adding a POST variable to uniquify the request.

For generic javascript.... when calling the target url, simply append a generated dummy query string parameter (like a timestamp)

"http://myhost.com/myAjaxPage.php?_nocache=321313213445462"

Again, the same caveat applies as for the Pragma header - you would probably want to make this cache busting parameter conditional on browser type

Saturday, October 02, 2010

Extra Bucks Online

So in the past I have blogged a bit about finding an eBusiness which is quick to setup and get running. I'm pleased to announce that my latest attempt at this is entering its final proofing.....

ExtraBucks is now online and in its final proofing stages. The official launch date will come soon.... Essentially it is an odd jobs and piece work bulletin board. It's designed for those who want to pick up a bit of extra work outside of business hours, linking them up with people who need the odd job done. All 100% free to use. Check it out - have a play and feel free to leave some comments!

Monday, August 09, 2010

Calling the php interpreter within web script

Recently, part of a development required me to execute a shell command as part of a web request. The design was to kick start a long running PHP process (via shell script), which would run beyond the length of a standard web request. To do this, I used a forking shell command and popen... (if you are using shared hosting, shell execution methods are generally disabled - check php.ini's disable_functions config value to verify).

Within my dev environment (Windows) I was able to execute the following with no issues:

popen("start /b php my_script.php my_args", "w");
Note: 'start /b' forks a windows process - making it run in the background



When executing its equivalent in a Linux environment
popen("php my_script.php my_args &", "w");

In Linux, the server was thrown into a state of confusion, perpetually executing and then aborting the requested shell command (infinitely starting and exiting the requested PHP script in the popen command). Obviously, explicitly calling the php interpreter from within an executing php process is a particularly nasty thing to do.

I'm assuming this sort of thing doesn't happen in Windows as the shell command executes in its own command window (which, when using proc_open with proc_get_status, makes getting a correct PID value problematic). Anyone with further insight - feel free to leave comments.

To get around this issue, I ensured that in the Linux environment the shebang path to the interpreter appeared on the first line of the executing script, and that the script was chmod'ed to allow direct CLI execution... i.e.

my_script.php:
#!/usr/local/bin/php
<?php
//
// my code
//
?>

CHMOD command used:
chmod 755 my_script.php

Then finally the linux popen command:
popen("./my_script.php my_args &", "w");

The default working directory of popen, and all other shell script execution methods is always web root (so my_script.php sat in the web root dir).

The same issue occurs with all other shell script execution methods:
system, exec, proc_open, passthru, back ticks (`) etc.

Sunday, June 27, 2010

CodeIgniter - Ruby on Rails (RoR) Layouts and filters

DOWNLOAD THE CODEIGNITER DOCTRINE STARTER (~8Mb)

Continuing to list starter features, I have also integrated Ruby on Rails (RoR) style layouts and before / after filters.

Both were implemented as CI hooks - the layout and filter code was obtained from the CodeIgniter forums and wiki posts.

Using the Filters hook, each action is wrapped in a Doctrine Transaction, which ensures all db updates etc. are ATOMIC.

One hassle I found with the above Filters system was that I couldn't implement a before / after filter directly in the controller class. Within the Doctrine_Transaction filter I ensure that if a controller has a before_action or after_action function within its class definition, it is called before or after the action is executed (see the sketch below). This obviously could / should eventually be abstracted out into its own Filter class.... but it works just the same as is.
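As a rough illustration only (the controller and method bodies below are hypothetical, and it assumes the filter simply checks for these methods and invokes them around the action):

//
// Hypothetical controller using the before / after convention
//
class Orders extends Controller {

    // invoked by the Doctrine_Transaction filter before the action runs
    function before_action()
    {
        // e.g. verify the user is logged in
    }

    // invoked by the filter after the action has completed
    function after_action()
    {
        // e.g. write an audit log entry
    }

    function index()
    {
        // normal action code - wrapped in a Doctrine transaction by the filter
    }
}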

Tuesday, June 08, 2010

CodeIgniter - Doctrine unit testing - starter features




So to expand on the previous post... specifically to describe a few of the extra bells and whistles in the project starter....


Unit Testing
I have created a (very) simple extension to the CodeIgniter unit testing as I wanted a simple, web based interface to unit tests. I found the CodeIgniter terminology used in unit testing a little confusing (coming from an NUnit / JUnit testing background). Unless I missed something in the doc (which is very possible), while CodeIgniter supports test assertions ($controller->unit->run when the unit test class is loaded), there was no way to group a number of assertions together into a logical unit of testing.

I'm going to use a little NUnit / JUnit terminology to describe unit testing via the web page interface in the starter. In the ApplicationCode/libraries folder there is a base_test_controller which, when extended, replicates basic TestFixture functionality. To create a Unit Test within the fixture, simply create a function prefixed with test_. The following demonstrates using a controller as a test fixture:



//
// My Test Fixture
//
class MyTest extends base_test_controller {

    //
    // This method is executed before any test methods are run
    //
    function fixture_setup() {}

    //
    // This method is executed after all test methods are run
    //
    function fixture_tear_down() {}

    //
    // This method is executed before each test method is run
    //
    function unit_setup() {}

    //
    // This method is executed after each test method is run
    //
    function unit_tear_down() {}

    //
    // Your Unit Test
    //
    function test_My_unit_test()
    {
        $this->test_description = "This is a description of my unit test";

        $this->unit->run(true, true, "Assertion one has passed");
        $this->unit->run(true, true, "Assertion two has passed");
    }
}




Saturday, June 05, 2010

PHP, CodeIgniter, Doctrine and ExtJS


So, it's been ages since I have posted anything on this particular blog, so time for an update.


Although I still dabble with MS technologies, I find the bulk of my development has moved to MVC-geared frameworks (ideally Rails-based), which (I find) work best in the context of dynamic languages..... so I have been using platforms like Ruby, JRuby, PHP, even ColdFusion for the last few projects. My tech stack of choice is:

ExtJS : client side JS library
JRuby : presentation tier
Java + Hibernate : domain + data layer

In the latest project however, I have been developing a survey site in PHP. To save anyone else the time, I have created a 'project starter' development stub which incorporates:
  • ExtJS 3.2.0: client side JS library,
  • CodeIgniter 1.7.2: presentation + domain tier,
  • Doctrine 1.2: ORM data layer framework

The idea being that you can simply extract and develop, without wasting time integrating all the technologies so that they work together....

Some features of this project starter are:
  • CodeIgniter Models have been replaced with Doctrine Records
  • Doctrine is loaded into CI as a plugin
  • RoR type before and after filters....
  • Doctrine transactions automatically wrapped around every action at execution time (ATOMIC db updates)
  • Basic Role based security (I think Redux may be in there as well?)
Simply extract, hook up the database.php config file and voila.... You can start coding your layouts, views and models.

Licensing: refer to all integrated frameworks licensing agreements for details (i.e. Doctrine, ExtJS, CodeIgniter). Original licensing agreements are bundled in the Project starter for your reference. The Project starter is a freely distributed, open source integration of technologies. Just make sure that you are using each framework in adherence to their respective licensing terms. As far as I am aware all technologies can be freely used under GPL.... 

Wednesday, December 06, 2006

Reimaging a RAID array from ATA using Windows XP

Recently I found myself in the awkward position of needing to replace my laptop's ATA Hard Drive with a RAID array (RAID 0 - Striping). Generally this should be a happy time, due to the HDD performance boost you could expect from a striped array. Unfortunately, I could not afford to lose too much time installing the drives and setting up the necessary software environment.



The obvious solution to this is to image the new RAID array using the old drive's contents, and generally this would be a relatively painless procedure. RAID tends to complicate matters however, as Windows requires RAID drivers at installation. Using an installation of XP imaged from a non-RAID'ed drive produces unpredictable results (freezes, BSOD etc.) when applied to a RAID'ed one. In addition, not all imaging software supports RAID imaging. Norton Ghost 2003 and earlier do not support RAID..... I used a trial version of Acronis True Image which worked like a charm.



To get Windows working after applying the image to the newly installed RAID array, you can boot from the Windows CD and repair the imaged Windows installation. Other drivers such as video drivers should probably be installed - I found my nVidia GeForce 6800 prevented me from booting into Windows, although I could load Windows Safe Mode with network support.



Below is a list of the steps I took to install RAID and restore my previous software environment:




PREPARATION
  1. Installed Acronis True Image
  2. Ran a backup to an external USB HDD
  3. Created a bootable CD in Acronis
RAID INSTALLATION
  1. Copied RAID Drivers to floppy disk
  2. Booted into BIOS to check boot order (floppy, CD, hdd etc.)
  3. Cracked open the laptop and removed the old hdd.
  4. Set the jumpers on the new Slave hdd to cable select (as per specs for Alienware laptops)
  5. Installed 2 new hdds (of equal size, speed etc)
  6. Booted into BIOS and set the HDD Mode to RAID
  7. Save and Exit
  8. Boot into RAID BIOS (I used FastTrack.... CTRL-F at bootup)
  9. Include both the installed hdds to a Striped (RAID 0) array
  10. Save and Exit
  11. Verify on bootup display that the RAID array can be detected
REIMAGING
  1. Boot off the Acronis Bootable CD (PREPARATION - step 3)
  2. Follow the prompts to reinstate the image stored on the USB HDD - resizing the target partition size if necessary.
  3. Boot off the Windows CD leaving the RAID Drivers in the floppy drive (RAID INSTALLATION Step 1)
  4. Press F6 when prompted to install third party SCSI or RAID Driver
  5. Select the driver from the options given
  6. Proceed through Windows installation until asked if you wish to repair an existing installation of Windows
    -NOTE - Do not choose the Recover a Windows installation option at the start of the Windows installation process.
  7. Select Yes and select the installation of Windows you wish to repair
  8. Continue reinstalling Windows.
  9. Boot into Windows safe mode.
  10. Reinstall motherboard chipset, video and sound drivers.
  11. Boot into Windows and everyone's happy.

Thursday, May 18, 2006

Asp.net 2.0 Profiling - Auto Saving Form Data


Microsoft has extended the standard web state persistence models with the addition of profiling in the ASP.NET 2.0 framework. Essentially, using profiling is identical to using the HttpSession; the difference is that user data is persisted between site visits. By using the Profile, a website is able to save and retrieve persisted user data regardless of the user's browser / session / machine state. We are now able to persist user specific data (i.e. session information) between site visits without needing to write a data abstraction layer (db interface).

In other words, we can save session related information between sessions. Ignoring all the extra ASP.NET 2.0 Framework bells and whistles, an obvious use for Profiling would be found in UI design; saving and repopulating the entered data of Form input controls (i.e. Textbox) between a user's visits. One use for this sort of functionality would be remembering a User’s previous search in search pages. Another use would be to temporarily save a User’s progress through larger form based work processes (i.e. completing an online survey). This way, should the User’s Session be interrupted (i.e. browser crash), the User can continue data entry from where they last left off the next time they log onto your Site. User form data is automatically saved on postback without the need for special persistence code.


The example code (github : https://github.com/benkitzelman/ProfilePersistence) demonstrates how to construct a custom Profile responsible for transparently saving and restoring registered control values on Page loading.


By looking at PROFILE.CS we can see that it inherits System.Web.Profile.ProfileBase. It contains a public method ‘Register’ which takes a System.Web.UI.Control as one of its parameters. Register is responsible for persisting a passed control’s property to the profile, and registering it for repopulation on the next postback. The second parameter is a string defining a property name with which to bind to (i.e. when registering a TextBox, we could bind the “Text” property). Using the given control’s Page property, event handlers are assigned to the parent page’s Pre Render and Pre Loading events. Note that within the PreLoading event handler, profiled data is only loaded into the target control on the first load (Page.IsPostBack == false) preventing any overwriting of entered data between postbacks (ViewState).


The Profile manages a collection of UrlProfiles. Essentially a UrlProfile is a Serializable object (data container) responsible for managing profile data for a particular url. It is also responsible for managing the profiled data for registered controls within its Target Page.


The UrlProfile was implemented in preference to a Page Profile, as quite often a page's functionality can vary considerably depending on its Url (address + query string parameters). Each registered control has a ControlProfile constructed for it.


Ideally the reading / populating of target controls should be abstracted, requiring as little handling code as possible. Looking at PROFILETEST.ASPX, we can see that only one line of code at initialization is required to register a control for Profile persistence.



protected override void OnInit(EventArgs e)
{
    base.OnInit(e);
    Profile.Register(testBox, "Text");
}
The only thing remaining is to hook up the custom profile in the website's web.config, specifying a default profile provider (see Custom Persistence Layer below).

Custom Persistence Layer
By using a provider model, how this data is persisted is highly configurable. The standard (implicit) provider is the SqlProfileProvider (all data is serialized and stored to the db). One can, however, inherit the System.Web.Profile.ProfileProvider base class and assume control of the data persistence. In the example code, the TEXTFILEPROFILEPROVIDER.CS serializes User data to a text file. This implementation is based heavily on the MSDN example.

Saturday, March 04, 2006

DTMF Sampling - Constructing a Wave



How the heck do you generate dynamic sample data suitable for the wave format (PCM)? It's actually not that bad. The key is to follow the wave format specification



http://replaygain.hydrogenaudio.org/file_format_wav.html
http://ccrma.stanford.edu/courses/422/projects/WaveFormat/

http://www.sonicspot.com/guide/wavefiles.html

CONSTRUCTING THE WAVE
Code for this blog is posted at Github here

So we see that the first 44 bytes of a wave file are dedicated to wave format / header information; byte 45 onwards contains all the sample data - the bits which make the noise.

In my Wave class, the constructor allows setting of basic Wave settings (sample rate, Resolution [8 or 16 bit], Channel [left, right, mono, left-right stereo]) and creates a byte array 44 cells in size, populating it with initial header values. All this code is pretty much standard - not really worth explaining, as much of it can be a copy-paste job. The main thing is to follow the wave format spec given in the above links.

The sampling code is a bit more interesting, as a bit of maths is involved. Essentially DTMF requires the summation of sine waves of two frequencies to generate a tone recognised by a phone exchange (Dual Tone Multi-Frequency). Standard frequencies for each digit on the phone dial pad (0-9 * # a b c d) can be found at:

http://users.tkk.fi/~then/mytexts/dtmf_generation.html

As far as I am aware, DTMF frequencies are international standards and so the posted frequencies should work with phone exchanges world wide.

In my solution I created a basic data container class called SineWave. In its constructor, the frequency (Hz) is given as an int, along with the left and right amplitude (volume) at which the frequency should be sampled.

Ok, so looking at the Wave class we have a static method ConstructWave (internal method), which in addition to encoding properties takes an array of SineWaves (the frequencies to be summed) and a TimeSpan (how long the resulting sample should be played).

Say for instance we wanted to generate a tone for the digit '1', using the frequencies specified at http://users.tkk.fi/~then/mytexts/dtmf_generation.html we can construct 2 SineWaves, and pass them to ConstructWave :



//
// playing frequencies at full volume
//
SineWave[] sineWaves = new SineWave[2];
sineWaves[0] = new SineWave(1209, 1, 1);
sineWaves[1] = new SineWave(697, 1, 1);

//
// generate an 8 bit 16kHz sample in mono
//
Wave digitOne = Wave.ConstructWave(sineWaves, 16000, Resolution.EightBit, AudioMode.Mono, TimeSpan.FromMilliseconds(250));


THE IMPLEMENTATION OF SAMPLING

On closer inspection of the ConstructWave method in the Wave class, we can see that all sampling is contained in the AppendSample method. Using the target sample rate and sample duration (provided to ConstructWave), it's relatively easy to determine how many bytes the wave sample data should be (Data Size):

i.e.

sample data Byte count = (Sample Rate (Hz) * duration (in seconds)) * no. of bytes per sample

WHERE
no. of bytes per sample = (resolution / 8) * no.of channels
IF Mono : no. of channels = 1
ELSE no. of channels = 2
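Plugging in the digit-one example above as a quick sanity check: 16000 Hz * 0.25 s = 4000 samples, and at 8-bit mono (1 byte per sample) that gives a 4000 byte data chunk.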



According to the wave header format (above), the total sample data byte count (or data size) should be assigned to bytes 40 - 43. Helper methods ExtractByte and ExtractInt have been written in the Wave class to extract each byte of a 4 byte int (Int32) via bit masking. The Frame Size should also be set in bytes 4 - 7:


Frame Size = DataSize + 36
[NOTE: 36 is the number of remaining bytes in the wave header past the Frame Size record]
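Continuing the example, a 4000 byte data chunk gives a Frame Size of 4000 + 36 = 4036, written into bytes 4 - 7.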


Ok, now to calculate the sample byte data itself..... Both frequencies should be assigned a constant, which in code I have called dataSlice:


dataSlice = (2 * PI) / (waveTime / sampleTime);
[NOTE: waveTime = 1 / frequency (Hz)
sampleTime = 1 / sample rate (Hz)]
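For example, sampling the 1209 Hz tone at 16 kHz gives dataSlice = (2 * PI) / (16000 / 1209), roughly 0.475 radians per sample.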


Using the number of samples (Sample Rate (Hz) * duration (in seconds)) as the loop count, we can calculate each fragment of sample data (bytes 44 - end of array) for the left and right channel (or just the left if mono) as follows:



dataLeft = (Math.Sin (i * FrequencyOneDataSlice) * LeftAmplitude) + (Math.Sin (i * FrequencyTwoDataSlice) * LeftAmplitude) ;

dataRight = (Math.Sin (i * FrequencyOneDataSlice) * RightAmplitude) + (Math.Sin (i * FrequencyTwoDataSlice) * RightAmplitude) ;

WHERE
LeftAmplitude = relative volume of the left channel (must be <= 0.5)
RightAmplitude = relative volume of the right channel (must be <= 0.5)
i = current loop iteration



Finally, we mask the resulting number using the resolution of the wave we are sampling (8 / 16 / 24 bit) and store it in the underlying byte array. If you are storing multi channel data (Left, Right, LeftRight Stereo), the data bytes should be interleaved....



i.e. 8-bit LeftRight Stereo:

...

waveBytes[n] = ExtractFirstByte(dataLeft)

waveBytes[n+1] = ExtractFirstByte(dataRight)

...


16-bit Stereo:

...

waveBytes[n] = ExtractFirstByte(dataLeft)

waveBytes[n+1] = ExtractSecondByte(dataLeft)

waveBytes[n+2] = ExtractFirstByte(dataRight)

waveBytes[n+3] = ExtractSecondByte(dataRight)

...


And that's it! As the iteration continues, the byte array is filled with dual tone byte samples until the target number of samples has been reached (Sample Rate (Hz) * duration (in seconds)).


PLAYING DIRECTLY TO THE SOUNDCARD


Using P/Invoke, we can access winmm.dll - a Windows system library - to play or save the generated wave as follows:


...

//External method declaration
[DllImport("winmm.dll", SetLastError = true)]
static extern bool PlaySound( IntPtr pszSound, System.UIntPtr hmod, uint fdwSound );

// calling the declared external method
IntPtr ptr = Marshal.UnsafeAddrOfPinnedArrayElement(this.m_waveBytes, 0);
PlaySound(ptr, UIntPtr.Zero, (uint) SoundFlags.SND_MEMORY);
...


References : http://209.171.52.99/audio/concatwavefiles.asp


Wednesday, March 01, 2006

Mp3's & Wave - Constructing DTMF audio files in C#

TERMS:

DTMF: Dual Tone Multi-Frequency - the beeps sent by a phone to the exchange when the User enters a phone number.


THE PROBLEM:

1 - Constructing a wave using a common format (PCM)
2 - Constructing / sampling each digit's tone using standard frequencies
3 - Convert a phone number to a wave
4 - Setting channel (left, right, mono, left - Right stereo) and sampling settings
5 - Integrating unmanaged code into a managed app (using winmm.dll and the Lame encoder)
6 - Encoding the generated wave as an Mp3

** NOTE: This process has been patented as MP3 Telephony by HCV Wireless™

THE PLATFORM:

C# (easily transferable to other syntax though) using the Lame Mp3 encoder


BACKGROUND:

I recently had to create a basic DTMF converter for a client using managed code. Its initial inception would be as an application, with the view that it would eventually be ported over to a website and used as a service.

Essentially, all the app needed to do was take a phone number string and encode it into its DTMF representation as an Mp3 file. This meant that first I would have to create the sample as a Wave before ripping it as an Mp3 using the Lame encoder. An added feature was being able to set which channel the DTMF tones would be generated for - Left, Right, Mono, or Left-Right stereo.

Over the next few blog entries I will tackle each segment of the problem.