Why I am a little annoyed with StackOverflow currently

Ok, so as the title goes, I am a little annoyed with StackOverflow currently and my participation in the site has dropped to almost none.

As a bit of background, I have been active on SO (StackOverflow) for about two years; this problem has only been occurring for the last four months or so, with the arrival of another user.

Before you get any ideas, no, I am not going to rant about the failures of the site, save for one. The reason for my retreat from SO is one user who, when he speaks, uses the most foul language at his disposal save for actual swear words.

Some examples of the words he has used (to describe me and other users, including moderators):

- Clown
- Fool
- Idiot
- Whining Bitch
- Foolish
- FUD
- Probably more but I have forgotten

Along with that, he uses his reputation as a weapon to insult me and to try to win upvotes over me (WTF?). This is the sort of user (and he did do this, by the way) who turns around and says he is authoritative because of how many years he has spent grinding out the same knowledge that is taught at university.

Even though it is quite fun to see him rage out at first, it quickly becomes tiresome, and now I barely bother to go on SO due to the abuse. The user in question uses this to his advantage, weaponising his reputation gain over me even further.

This comes down to the bad apple theory ( http://blog.codinghorror.com/the-bad-apple-group-poison/ ) laid out by Jeff a while back, whereby a single user starts to poison the experience for others. It is not only me who is affected; this user has attacked other users as well.

Now, I am not perfect. I must admit that I attempted to defend myself, and on one question where he added verbal abuse in the comments of my answer I went up to his own answer and pointed out a flaw in it. However, unlike this user, I have not resorted to verbal or hateful abuse of others, I have not lost my temper, and I have never placed comments that were not at least helpful to the OP. The comments I placed merely pointed out things in the answer, which, as I said to this user:

“You are perfectly free to say my answer is wrong and discuss it with me but have some manners. Do not abuse other users.”

He has recently purposefully sought out my answers, downvoted them, then planted some verbal abuse on them using three or more of the words in the list at the top of this page. He will also make a reference to “why are you posting in this space, your so useless”.

Now, I must admit that on one said answer of this other user’s I did get carried away and wrote something like 30 comments; however, the OP agreed with my comments and found them particularly helpful. I did not devolve into verbal abuse or hate, I merely used my knowledge of said technology to poke actual holes in the answer. I did not have the knowledge available to answer the question myself, so I was interested to know the answer too, and I got carried away with wanting to know. So much so that I used my own knowledge as a benchmark to test out other people’s answers.

This does, however, come down to a question: if I am able to provide constructive and meaningful responses (with which the OP agrees) and the user is unwilling to engage and respond, then why is that user answering that question? You would imagine that an answer should be watertight; if there are problems in its consistency and construction, wouldn’t you seek to fix them? Otherwise you are lying about your knowledge of the subject, are you not?

Despite that, I nearly got suspended for that activity, and this time around the moderator used it as a means to justify suspending me, weighed against the other user’s verbal abuse.

I have been both suspended and warned, and this comes down to the problem, a problem which will make me look self-righteous.

I have been able to back the moderator in question into a corner whereby they simply refuse to answer my last question, because they are unable to fix the problem. Their last response to me (in a thread which started out blaming me) was:

My suggestion would be if other user is rude to you, then flag either the comment or the post for a moderator to review instead of engaging them. There is no point in engaging them at that point, that is when a moderator should step in to handle it.

To which I replied:

As I said, I have been here before; it goes nowhere.

It is all well and good saying to do that, but it doesn’t work; he carries on, saying that the moderators agree with him and then verbally abusing moderators (look at the responses on my linked question for the PHP).

That is why I reply, that seems to be the only thing that makes him stop after a while. If I defend myself enough from him he eventually gets tired and moves on.

This reply has left the moderator unable to solve the problem; said user will carry on his antics after suspension, abusing me and other users without a care for manners. If I try to do anything about it from now on, I will also be banned (deleted) from the site.

Effectively this one user has ruined my entire experience of SO, and I have no doubt he is ruining other people’s too, given the number of users he attacks.

This is what happened last time, and even contacting SO themselves results in this response. The fact that the moderators and SO themselves cannot come up with a better one has made me tired of dealing with the abuse I receive from this one user; as such I no longer visit SO and will most likely be looking for another site to replace it soon.

Peer pressure of FIG

FIG standards and their assimilation into the PHP community are helping to create truly standardised coding, and any PHP developer will tell you that it is long overdue, especially when even within one core PHP class you find a mix of camel case and underscored methods.

But there is a worrying trend occurring. People are starting to go against how they like to code and what they find easy to read and use; instead of being consistent within their own coding, they are effectively being bullied into coding a completely different way. Maybe it is to not look like the odd one out on Composer, or the fact that their standard is not catered for because the people who prefer camel case are the ones calling the shots. At the end of the day it doesn’t matter; what matters is that instead of creating standardised coding, it is making people code in a way they don’t like.

I am all for ensuring that code standardisation exists within a module, but when you get two modules from two different authors you have to expect that where one author may like camel case, the other finds it hard and cumbersome to work with.

Yes, camel case works great a huge amount of the time, and yes, camel case is shorter, but not all PHP functions are true English; some are abbreviations of their longer selves. This means deciding where the capitals are, and should be, can get harder when using that function in a file outside of the one it is declared in.

This is not a call to arms against camel case, but more that people should embrace their own idea of what readable is while applying a standardisation across their coding that echoes throughout every file. That way, once a user has learned how you program, they can safely assume the rest of your extension/library/whatever is coded the same way.

The same could be said for the spacing and new lines on methods. I personally dislike putting the opening bracket of a method on a new line; it makes files longer and, in my view, disconnects the function’s body from its signature, making it harder to associate that routine with the function in question.

And I understand that single spaces help in standardisation, but do we really need to add spaces around the condition of the statement like so:

if (0 === 0) {

?

All of these micro-standardisations are actually making our code more complex and harder to read and maintain. Imagine an IDE that didn’t do that if-statement spacing for you (Eclipse PDT, for one). You now have to do that spacing yourself every fucking time. Imagine having to press enter twice as many times to define methods.

Many people say that micro-optimisation is the killer of any good application; I would say that micro-standardisation is the killer of any fun and exciting programming experience, and creates nothing more than micro-stress.

Yes, I understand that standardised spacing helps diffing on GitHub etc., but seriously, are you really going to have a problem with:

if(0 === 0){

compared to:

if (0 === 0) {

when diffing? No, of course not. I am an avid user of GitHub and host numerous projects there, but I wouldn’t care about spacing as tiny as that. It is the indentation spacing you care about, not tiny little things like that.

I hope the PHP community can put some of the PSR’s shortcomings behind them and allow people to just have fun while keeping standardisation within their application, providing a basic guideline instead of the 10 commandments which seem to be bullying developers into writing code differently from what is right for them.

Things I have learnt in the first 5 minutes of using Elastic Search

After hearing all the raving about Elastic Search and how it was awesome and “rad” or whatever “hip young” programmers want to say, I decided I would give it a go.

To get to the point, since this might be a bit tl;dr: I am not overly fond of it. I am unsure what companies like GitHub see in it.

It has a queue, no need for a river

Exactly that: implement indexing into your active record and you don’t need a river.
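As a rough illustration (the model class, hook name and index/type names here are all my own placeholders, not a real framework API), something along these lines in your active record layer does the job:

    <?php
    // A minimal sketch: "Video", afterSave() and the index/type names
    // are assumptions for illustration, not a real framework API.
    class Video extends ActiveRecord
    {
        // Called by the (hypothetical) framework after every insert/update
        public function afterSave()
        {
            $client = new Elasticsearch\Client();

            // Push the fresh document straight into Elastic Search;
            // no river polling the MongoDB oplog required
            $client->index(array(
                'index' => 'main',             // placeholder index name
                'type'  => 'video',            // placeholder type name
                'id'    => (string)$this->_id,
                'body'  => array(
                    'title' => $this->title,
                    'tags'  => $this->tags,
                ),
            ));

            parent::afterSave();
        }
    }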

In fact, I would advise against the river. It uses the oplog, which can be slow, and not only that but you are adding yet another lock on top of the secondaries that are trying to read as well, which may increase the chance of replication falling behind. This is of course dependent upon how often the river pings your oplog and how many new ops you have in that window.

This is a good point.

It has terrible documentation

Its documentation is great at explaining the API, no doubt about it, but if you want to actually find out how something works, and why, you have to constantly ask StackOverflow.

It just describes what parameters to put in and then leaves the rest up to you, thinking you don’t want to bother yourself with those details. We do, though; we are not just bandwagoning your product. We want to know how sharding and replication work, how indexes work, how to manage the product, and more.

Even when looking at the API, the documentation can sometimes be… unhelpful, mainly due to its huge font size yet tiny centred layout, English language problems and disorganisation.

Overall, I came away less than impressed with Elastic Search’s documentation.

I actually Google everything first so I don’t have to navigate that mess.

Lucene is not a good base

Yes, Lucene is one of the originals when it comes to FTS tech, but this isn’t a good thing. It was made back when people didn’t care about speed or scalability; all they cared about was that it could search like Google.

This notably means that Lucene has many problems, like no infixing, which translate to Elastic Search. It means that at times, to get the effect you want, you must use prefix or wildcard searches, which are so slow they are pretty much a death knell for any database, especially one which serves an FTS tech.

That is just one of the many problems that plague modern Lucene, including a mix-and-match querying language born from years of being backward compatible while trying to keep up with changes.

It has some of the most verbose querying in the universe

When querying a single keyword takes this much writing:

    $cursor = glue::elasticSearch()->search(array(
        'type' => 'help',
        'body' => array(
            'query' => array(
                'filtered' => array(
                    'query' => array(
                        'bool' => array(
                            'should' => array(
                                array('multi_match' => array(
                                    'query' => glue::http()->param('query', $keywords),
                                    'fields' => array('title', 'blurb', 'tags', 'normalisedTitle', 'path')
                                )),
                            )
                        )
                    )
                )
            )
        )
    ));

And querying more than one takes:

    $res = glue::elasticSearch()->search(array(
        'body' => array(
            'query' => array(
                'filtered' => array(
                    'query' => array(
                        'bool' => array(
                            'should' => array(
                                array('prefix' => array('username' => 'the')),
                                array('prefix' => array('username' => 'n')),
                                array('prefix' => array('username' => 'm')),
                                array('match' => array('about' => 'the')),
                            )
                        )
                    ),
                    'filter' => array(
                        'and' => array(
                            array('range' => array('created' => array(
                                'gte' => date('c', time() - 3600),
                                'lte' => date('c', time() + 3600)
                            )))
                        )
                    )
                )
            ),
            // sort belongs at body level, not inside the filtered query
            'sort' => array()
        )
    ));

You do start to feel yourself slipping away.

You must do your own tokenizing if you wish to prefix on two keywords separately

Elastic Search won’t do this; it will actually search for a phrase by default, even when you don’t use the phrase searcher.
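So you end up tokenizing the input yourself and building one prefix clause per keyword. A minimal sketch of the workaround, reusing the username field from the earlier example:

    <?php
    // Split the raw input into tokens ourselves, since Elastic Search will
    // otherwise treat the whole string as one prefix term
    $tokens = preg_split('/\s+/', trim($userInput));

    $should = array();
    foreach ($tokens as $token) {
        // One prefix clause per keyword
        $should[] = array('prefix' => array('username' => strtolower($token)));
    }

    $body = array(
        'query' => array(
            'bool' => array('should' => $should),
        ),
    );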

It has some of the most complex querying in the universe

When you have many ways to represent the same operator, and six different operators for what is essentially the same thing…

I believe this is legacy from Lucene, one of the many downsides of it being old and backward compatible with every version.

It has no exact filtering without turning off the analyzer

Yep, you read that right: you want to filter (yep, filter, not query) on deleted? You’re gonna have to make sure it isn’t analysed, buddy.
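For anyone hitting the same wall, the fix has to go in the mapping: set the field to not_analyzed before indexing. A sketch using the PHP API (index, type and field names are placeholders):

    <?php
    $client = new Elasticsearch\Client();

    // Mark the field as not_analyzed so a filter matches the exact stored
    // value instead of analysed tokens
    $client->indices()->putMapping(array(
        'index' => 'main',      // placeholder index name
        'type'  => 'video',     // placeholder type name
        'body'  => array(
            'video' => array(
                'properties' => array(
                    'deleted' => array(
                        'type'  => 'string',
                        'index' => 'not_analyzed',
                    ),
                ),
            ),
        ),
    ));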

It has no easy way to define indexes server side

I have no idea why Elastic Search does this, but they make you define the indexes client side. In all my life I have never had a reason to do that, and if you want to comment saying you do, think about it carefully: “DO YOU REALLY???”.

Only about 1% of cases need client-side index definition; the other 99% just think they do.

Either way, I am now stuck with an Elastic Search setup script in my application about 500 lines long, which has to be run in patches since you can’t run the delete index command and then the recreate index command back to back; something about one being done sync and the other async.
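The workaround I would sketch (my own illustration, with placeholder names) is to poll until the delete has actually taken effect before recreating:

    <?php
    $client = new Elasticsearch\Client();

    // Drop the old index, then poll until the delete has actually taken
    // effect before recreating; firing create straight after delete is
    // exactly the sync/async problem mentioned above
    $client->indices()->delete(array('index' => 'main'));

    while ($client->indices()->exists(array('index' => 'main'))) {
        usleep(100000); // wait 100ms and check again
    }

    $client->indices()->create(array('index' => 'main' /*, mappings etc. */));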

It says not to use delete yet provides no feasible alternative

Its exact words are:

Note, delete by query bypasses versioning support. Also, it is not recommended to delete “large chunks of the data in an index”, many times, it’s better to simply reindex into a new index.

http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-delete-by-query.html

So if you need to delete a user’s videos from the videos type index, think again, and since you have just shy of 200m records you can’t simply reindex.

Creating mappings is painful

There is no default mapping ability, which means similarities between types are duplicated; 3 of my 4 types have duplicated mappings.

EDIT: There is a default mapping, it is just that their documentation is so terrible it was hidden under “dynamic mapping”: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-dynamic-mapping.html
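To save you the dig through that page: a _default_ mapping is declared once and inherited by every type in the index, so shared fields only need defining once. A rough sketch with made-up index and field names:

    <?php
    $client = new Elasticsearch\Client();

    // Fields shared by every type go into _default_ once, instead of being
    // copy-pasted into 3 of 4 type mappings
    $client->indices()->create(array(
        'index' => 'main',   // placeholder index name
        'body'  => array(
            'mappings' => array(
                '_default_' => array(
                    'properties' => array(
                        'created' => array('type' => 'date'),
                        'title'   => array('type' => 'string'),
                    ),
                ),
                // A type then only declares what differs from _default_
                'video' => array(
                    'properties' => array(
                        'duration' => array('type' => 'integer'),
                    ),
                ),
            ),
        ),
    ));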

It is actually quite slow

Sphinx did it in about 1ms or less; Elastic Search takes 6ms to return no results.

Its querying is not very well standardised to other techs

from = skip/offset
size = limit

Seeing the problem yet?
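To spell it out, here is the same pagination in three dialects (the SQL and MongoDB lines are comments for comparison):

    <?php
    // SQL:      SELECT ... LIMIT 10 OFFSET 20
    // MongoDB:  $cursor->skip(20)->limit(10)
    // Elastic Search insists on its own names for the same two numbers:
    $body = array(
        'from'  => 20, // everyone else calls this skip/offset
        'size'  => 10, // everyone else calls this limit
        'query' => array('match_all' => array()),
    );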

Its PHP API is larger than my application in both file count and size

It is 5.42MB in size!!! To compare, Yii is 58MB. That may not sound like a lot, but Yii is a full-stack web framework… yep.

Its PHP API is Composer-only

NOT EVERYONE USES COMPOSER!!!

The PHP API uses 9MB (base) per request

Under Sphinx that was 3.25MB.

Decent AWS usage is an optional extra

Welcome to hell kid: http://www.elasticsearch.org/tutorials/elasticsearch-on-ec2/

Setting up Elastic Search properly was tedious and soul-breaking

Lack of documentation and constantly going backwards and forwards to SO made me look about 10 years older.

Elastic Search returns no results if no keywords are provided

Super annoying…
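The workaround is to detect the empty case yourself and fall back to match_all, sketched here using the same glue wrapper as the earlier examples:

    <?php
    $keywords = trim(glue::http()->param('query', ''));

    if ($keywords === '') {
        // No keywords: ask for everything instead of getting nothing back
        $query = array('match_all' => array());
    } else {
        $query = array('multi_match' => array(
            'query'  => $keywords,
            'fields' => array('title', 'blurb', 'tags'),
        ));
    }

    $res = glue::elasticSearch()->search(array(
        'type' => 'help',
        'body' => array('query' => $query),
    ));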

Schema flexible but not

I recently made a mistake in my documents which meant I needed to resave an object as a string. Of course, Lucene is not schema flexible, which means that, despite trying to be, Elastic is not either.

So what do I do now? Re-update the index through a specially designed script (WTF, why can’t I do this shit in the fucking config???) and then reapply all documents? Or delete the index, reapply all types and mappings again, and reinsert all documents again…

No easy document management

It is like providing MongoDB or MySQL without a console or shell.

No easy way to reindex

Nope.

Keeping the config file safe

Having the configuration done client side means that you must, as I said, have a PHP (or whatever) file in your app structure which can run the config.

This immediately poses a problem: how do you keep this at hand in a browser-runnable location without making specific server rules etc.? You can’t…

You have to turn your setup file into a full-blown console script, with all the works that SHOULD BE IN ELASTIC SEARCH.

The diagnostics are terrible

I recently started getting red status all the time, so, as suggested by the user group, I looked into the logs, only to find:

[2013-12-22 12:44:24,257][INFO ][node                     ] [Dominic Fortune] version[0.90.8], pid[1494], build[909b037/2013-12-18T16:08:16Z]
[2013-12-22 12:44:24,257][INFO ][node                     ] [Dominic Fortune] initializing ...
[2013-12-22 12:44:24,265][INFO ][plugins                  ] [Dominic Fortune] loaded [], sites [HQ, head]
[2013-12-22 12:44:26,818][INFO ][node                     ] [Dominic Fortune] initialized
[2013-12-22 12:44:26,818][INFO ][node                     ] [Dominic Fortune] starting ...
[2013-12-22 12:44:26,891][INFO ][transport                ] [Dominic Fortune] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.183.128:9300]}
[2013-12-22 12:44:29,919][INFO ][cluster.service          ] [Dominic Fortune] new_master [Dominic Fortune][5ddMpkRQTZa3TqQ-ljUabg][inet[/192.168.183.128:9300]], reason: zen-disco-join (elected_as_master)
[2013-12-22 12:44:29,951][INFO ][discovery                ] [Dominic Fortune] elasticsearch/5ddMpkRQTZa3TqQ-ljUabg
[2013-12-22 12:44:29,979][INFO ][http                     ] [Dominic Fortune] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.183.128:9200]}
[2013-12-22 12:44:29,980][INFO ][node                     ] [Dominic Fortune] started
[2013-12-22 12:44:29,987][INFO ][gateway                  ] [Dominic Fortune] recovered [0] indices into cluster_state
[2013-12-22 12:45:07,323][INFO ][cluster.metadata         ] [Dominic Fortune] [main] creating index, cause [api], shards [5]/[2], mappings []
[2013-12-22 12:45:17,669][INFO ][cluster.metadata         ] [Dominic Fortune] [main] create_mapping [_default_]
[2013-12-22 12:45:17,680][INFO ][cluster.metadata         ] [Dominic Fortune] [main] create_mapping [help]
[2013-12-22 12:47:19,818][INFO ][node                     ] [Dominic Fortune] stopping ...
[2013-12-22 12:47:19,845][INFO ][node                     ] [Dominic Fortune] stopped
[2013-12-22 12:47:19,845][INFO ][node                     ] [Dominic Fortune] closing ...
[2013-12-22 12:47:19,856][INFO ][node                     ] [Dominic Fortune] closed
[2013-12-22 12:47:45,495][INFO ][node                     ] [Stryker, William] version[0.90.8], pid[1695], build[909b037/2013-12-18T16:08:16Z]
[2013-12-22 12:47:45,496][INFO ][node                     ] [Stryker, William] initializing ...
[2013-12-22 12:47:45,502][INFO ][plugins                  ] [Stryker, William] loaded [], sites [HQ, head]
[2013-12-22 12:47:48,068][INFO ][node                     ] [Stryker, William] initialized
[2013-12-22 12:47:48,068][INFO ][node                     ] [Stryker, William] starting ...
[2013-12-22 12:47:48,140][INFO ][transport                ] [Stryker, William] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/192.168.183.128:9300]}
[2013-12-22 12:47:51,170][INFO ][cluster.service          ] [Stryker, William] new_master [Stryker, William][rMklMXasRDS4lURA0wQ7lQ][inet[/192.168.183.128:9300]], reason: zen-disco-join (elected_as_master)
[2013-12-22 12:47:51,198][INFO ][discovery                ] [Stryker, William] elasticsearch/rMklMXasRDS4lURA0wQ7lQ
[2013-12-22 12:47:51,222][INFO ][http                     ] [Stryker, William] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/192.168.183.128:9200]}
[2013-12-22 12:47:51,223][INFO ][node                     ] [Stryker, William] started
[2013-12-22 12:47:51,242][INFO ][gateway                  ] [Stryker, William] recovered [1] indices into cluster_state

…nothing. Their diagnostics are terribad, to say the least. Currently their advice is to remove the data directory and lose ALL YOUR DATA IN THE PROCESS.

That brings me onto the next point:

No real journaling

No easy recovery if things go terribly wrong.

Restarting a node always results in red status (still no solution found)

I cannot seem to restart a node, whether via a kill command or by restarting the OS, without being left with red status and having to delete the data directory and lose all my data.

Elastic Search works by quorum (if you don’t know what democracy is try searching Wikipedia if you can…)

That last point was my own stupidity; remember that Elastic Search works via quorum.

That is why, if you set 5 shards and 2 replicas with no data, you will always get red status on restart…

Auto join sharding is a really bad idea

Great in theory, but what if you accidentally start two nodes on the same machine because you were tired or something? OH NOES, I JUST LOST MY ENTIRE CONFIG!!! When you restart that node without the extra one on it, you are left with nothing short of a disaster, and you have no indication of how to fix it.

I have now had to delete all my data three times due to noob mistakes, ironically making learning Elastic Search more difficult than if I had to learn sharding commands separately.

I heard that MongoDB was once going to do something similar; please, for the love of God, don’t.

Comma Separated List of Values in NetSuite Saved Searches

I had a problem recently where I needed to get the values from a multi-valued field in a NetSuite saved search out as a comma-delimited string field.

Let’s take an example: imagine I have a field on a customer record called “Interested Subjects” which contains a selection of marketing subjects that interest the user. The point of this post is to be able to get those interested subjects out as a single comma-delimited string.

Normally you could do this with STRAGG or LISTAGG in PL/SQL; however, NetSuite does not actually allow those operators. So what can we do instead? Well, it turns out NetSuite’s SQL API does STRAGG/LISTAGG-style aggregation by default when you group.

To get this working, simply group on a unique identifier per customer (I chose internal ID) and select “Maximum” from the grouping options for your multi-valued field, in my case “Interested Subjects”. The result will be something along the lines of:

ID | Maximum of Interested Subjects
1  | Wildlife,Birding,Natural History

This is such a common problem I am sure this will help others.

Sphinx API Abstraction Layer

Following on from a previous post on the subject, I have rewritten my original Sphinx API wrapper into something better and easier to use this time.

The main change is that I got rid of the need to encapsulate everything within MongoDB-like syntax; take a look at the repo’s readme, it will explain everything.

MailChimp vs AWeber: WTF?

I have, for the company I work for, been looking into third party email providers.

Initially I went, by default, with MailChimp, but there are problems.

Email Templates

Their drag and drop editor makes it less than easy and intuitive to create amazing email templates. For example, imagine you selected the one-column layout and at the bottom you wanted to add a three-column list of products for upsell within the email. Nope! You cannot without changing your email template, and even though MailChimp will attempt to convert your old template to the new template, there will still be problems.

You can, to contradict what I just said, split a text block into two columns; however, to add complications, you can only split it into two columns, and you cannot drag and drop new template items on top.

I also tried making my own template for the drag and drop editor, which would mean I could let the marketing department use the base company template and just drag and drop items to their heart’s content. Nope! That is not possible; once you create your own template it reverts back to the HTML editor. There is no way to use that newly made template with an easy WYSIWYG editor.

Overall, their email building abilities were less than satisfactory.

Delivery Rates

Due to having free accounts (which is awesome) they do have one downside: occasionally spammers will use their network to send malicious emails, and even though ISPs know that MailChimp is okay, they do not trust the users on its network, so if you happen to send out a campaign while one of these malicious senders is doing their business you will see a lot of undelivered mail.

Developer Attitude

I had a nasty run-in when I decided to suggest that MailChimp upgrade their PHP API wrapper, put it in an easy-to-share location and allow pull requests from people who wish to improve it.

Basically, their response was: “get lost”.

To be more precise I suggested:

  • They update their API to use PHP 5.x code instead of PHP 4 code; more specifically, stop using PHP 4-style constructors and var declarations for class variables
  • Potentially use namespaces
  • Upload the API wrapper to GitHub
  • Upload the API wrapper to Composer

Most of these points were met with a blunt, blank response. I found that on any topic not directly related to asking for a helping hand, MailChimp were less than happy to listen.

No Sharing Attitude and Bad Legal Representation

Recently a relatively innocent open source guy decided he would make PSDs of popular and good-looking interfaces ( Here for reference ). He made one of:

  • MailChimp
  • Facebook
  • Apple Bento
  • Disqus
  • And more

MailChimp decided they didn’t like this innocent hobby of creating look-alikes and threatened him with a lawsuit if he didn’t take the PSD of their interface down.

Consequently he had to…

That being said other companies have said they are more than willing to have their PSD uploaded.

Conclusion

So my end conclusion on MailChimp was that I was begging to go somewhere else, not because their service sucked (it didn’t) but because their attitude sucked; behind all that fancy monkey talk and friendly customer service there was the blackened, beating heart of a selfish corporation who wanted to ensure that their way was the only right way, and anyone who questioned it was put down.

AWeber

After being turned away by SendGrid’s lack of segmentation, I decided to look at AWeber after a long list of endorsements:

To name a few…

Sign up

The first thing that greeted me was having to pay to sign up, which I didn’t like.

I would have preferred the notion of paying to send email messages to a list but being able to test the actual interface for free. As it was, I had to pay for something whose look, feel and usage I didn’t know. Would you pay for a computer when you couldn’t tell whether it was broken or not?

Their product page promoting their email program was less than satisfactory at explaining things in a manner that would impress anyone other than an idiot taken in by shiny buttons and bright colours.

Sign up Spam

After signing up, I instantly got a spam page of the CEO pushing his new book, with exactly the sort of page you would see from, say, those scam weight-loss berries or whatnot.

[Image: AWeber sign-up spam screen]

Click on the image to see a full 1080 screen print of the page to read the entire thing.

After getting this page, their credibility was thrown into question.

Signing up new Leads

No matter how you import your contacts, whether from your old system (where they may have already done the two-step opt-in) or by entering them manually within the account control panel, they must go through AWeber’s own two-step confirmation.

This means that ALL your leads will get a new confirmation message out of nowhere. Might not seem bad? Think again! Imagine I move all the lists for the company I work at out of NetSuite into AWeber. I have no choice but to notify every single person that this change has been made and that they must effectively reconfirm they wish to receive newsletters from our company.

I wonder how many would not be bothered to click that link? I would not; I signed up once, why do it again?

This also has another implication: normally when signing up to a site, you would think that you are also signing up to get bulk email from them. Not here. After your confirmation email, you get another email requesting you do more stuff in order to benefit from your membership. Unacceptable; things should be easy, not difficult.

You may say, “Ah, but what about https://help.aweber.com/entries/21664348-Can-I-Disable-Confirmed-Opt-In- ?” I would reply that turning off confirmed opt-in only works for web forms. This means that if you import a list you already have, they still need to confirm opt-in. There is no way around this.

Interface

Their interface is so confusing. Just to set up a list I am sent through what seems like 20 different pages with very little description (“WTF is a list description? Will my subscribers be seeing it?”); overall it is just badly put together and thought out. MailChimp’s interface is a LOT nicer and easier to use.

The good news is that once you are left to your own devices it gets a lot easier to navigate the interface.

API Documentation

The API documentation is not nerd friendly, let alone non-nerd friendly ( https://labs.aweber.com/docs/reference/1.0#api-map ). I am honestly lost in that mess; how do I upload ecommerce data to segment on products customers bought?

I got the hang of MailChimp’s API documentation in about 10 minutes; however, I have been looking at AWeber’s for about the last 30 minutes and I am still trying to work out exactly how it works.

As a developer I have had to work around a lot of bad documentation, and AWeber’s easily comes close to what Apache Thrift’s was like (in case you’re wondering, that had NO documentation, not even commented code).

Email Editor

Although really complicated to get to grips with, I have got to be honest when I say this: it is awesome! It can do the one thing that MailChimp cannot: it gives you control over the layout of your email without having to use set, static templates. Simply select the outer container, vertical split, add a new container, vertical split again as many times as you want and, bam, you have your custom email design.

I love AWeber’s email editor; it is a more liberal version of MailChimp’s.

However, on a downer, the custom forms for dealing with subscription handling are just not as nice looking or feeling as MailChimp’s. It would require work to make them so.

That was all I tested before I wrote this.

Open Source

This is actually a positive aspect. A quick Google search brought up AWeber’s GitHub: https://github.com/aweber

It is good to see that at least someone believes in collaborating; however, their API is still not on Composer, and it seems the developers are not frequenting GitHub to check the status of pull requests and issues.

Conclusion

MailChimp may be a heartless, soulless shell of a corporation, but it has quickly caught up with and surpassed the service that AWeber provides.

I guess the takeaway feeling is one of depression (herp derp): I have no choice but to go with MailChimp currently.

Summarise: A jQuery Plugin to make alerts easier

I quickly had a reason to make a small plugin to handle making alerts within most generic apps a bit more responsive and easier to change and manipulate with Ajax requests.

It is really simple and easy to use for your alerts.

I should note that I actually used Bootstrap CSS classes, syntax and styling for this plugin; however, the plugin is not limited to Bootstrap in any way and should provide configuration abilities for any site and CSS framework.

You can find more on this plugin on Github: https://github.com/Sammaye/summarise

Hope it helps,

SEO Keyword Density

Okay, so recently I was posed a question about SEO keyword density. Basically, I was tasked with adding a feature to the administration section of the company I work for to check that the description of a product had a certain keyword density that mattered to search engines. As a reference I was given the Yoast WordPress plugin, which actually includes a keyword density checker.
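For context, the check itself is trivial; here is a minimal sketch of the sort of calculation such a feature performs (my own illustration, not the Yoast plugin’s actual code):

    <?php
    // Naive keyword density: occurrences of the keyword as a percentage
    // of the total word count of the (tag-stripped) text
    function keywordDensity($text, $keyword)
    {
        $words = str_word_count(strtolower(strip_tags($text)), 1);
        if (count($words) === 0) {
            return 0.0;
        }
        $hits = 0;
        foreach ($words as $word) {
            if ($word === strtolower($keyword)) {
                $hits++;
            }
        }
        return ($hits / count($words)) * 100;
    }

    // e.g. keywordDensity($productDescription, 'binoculars') => 1.8 (%)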

I was sceptical about exactly why keyword density was so important, rather than the placement of the keywords, having seen many “myths” in SEO.

What do I mean by placement over density? Placement could be classed as putting the keywords in the right places, like in the meta, page title, header tags and content, in a natural manner.

So I decided to research this a little. I soon came across http://www.highervisibility.com/blog/what-is-the-proper-keyword-density-for-seo/ , which not only provides a meta-style discussion between numerous SEO experts but also provides a short but descriptive quote from Matt Cutts:

“the first time you mention a word, you know, ‘Hey, that’s pretty interesting. It’s about that word.’ The next time you mention that word, ‘oh, OK. It’s still about that word.’ And once you start to mention it a whole lot, it really doesn’t help that much more. There’s diminishing returns. It’s just an incremental benefit, but it’s really not that large.”

So already Matt states that once you start going crazy with those words and trying to cram them in, they become almost useless.

What is also more interesting is that one of the consulted SEO experts actually did some research, finding that the optimum keyword density varies between Google and other search engines:

He used pictures from gorank.com to determine that Yahoo recommends a keyword density of about 3%, while Google seems to like sites that have a 1-2% keyword density. Below is an example of the chart he used to form this opinion:

With this in mind, assuming one single density for all could be dangerous; what if the search engine thinks you are keyword stuffing? This used to be (and can still be) a common problem whereby scammers and hackers would keyword stuff to get fake sites to the top. So you can imagine that if a search engine thinks you are keyword stuffing, it may well give you a penalty.

In fact all of the consulted SEO experts seem to agree that keyword density:

  • Is not a fixed, calculable number
  • Is not a big problem to most sites
  • Could be premature optimisation for a page (if you are a programmer you will understand this one)

So with all this in mind I decided to recommend, with evidence, that maybe we should rethink our SEO tactics and not fall into this common myth trap.

MongoYii – A Yii MongoDB ORM

I had more spare time recently and decided to make an ORM for Yii.

It supports MongoDB and attempts to conform to Yii’s own standards as much as possible; it is named MongoYii.

I won’t explain much more here since the repository has a pretty comprehensive readme that will describe the project fully to you:

https://github.com/Sammaye/MongoYii

Debug MongoDB Map Reduce with the Mongo MapReduce Web Browser

I am in the process of rewriting this; if you want to see what I am excited about then check out the repo here: https://github.com/angelozerr/mongo-mapreduce-webbrowser

You may, or may not, remember (depending on whether you are new or returning to this blog) that I wrote a post some time ago about a new Map Reduce debug tool within your very own browser, using jQuery. My original sentiment on the subject was that it was still too rough to test; however, the authors @angelozerr and @pascalleclercq have worked hard on the release of v1.0.0 with the intention of making it stable, and boy, have they made it stable.

Here is the live demo. I recommend you try it out; I am going to blabber on down here about some of the things you can do with it now.

First off, you are greeted by an interface with the folders containing your map reduces on one side and a preview of your files on the other. If you double click (I tried single clicking at first too) on any of the example map reduces, you will see it load up in the right-hand pane.

[Image: Example Mongo MapReduce WebBrowser]

Feel free to click on the image to zoom in; it should be 1080 compatible. From the screenshot you can see what I mean about the explanation. As an added note, there is also a tabbed interface for map reduces (look at the top grey bar) for loading multiple files into memory at once.

Some of the key changes from the previous version I looked at are in the editor (the right pane in the screenshot). It boasts:

  • Code Completion
  • Syntax Highlighting
  • Code validation (with hint popups for telling you the exact error)

for each step of the map reduce. It will even underline errors for you so you can quickly and easily get to them without having to check what the icon in the side bar says. To add a cherry on top, it also has a live preview where your edits change the output immediately. All in all, this is one slick application.

I had none of the problems I experienced previously, quickly and easily building my own map reduce and seeing its output.

There is still one problem that might catch people out: the input documents cannot take BSON objects such as ObjectId() yet, which means you cannot yet input:

[{'_id':ObjectId("455556676678"), "name":"sammaye"}]

That being said, you can input documents in their extended JSON format, like:

[{"_id":{"$oid":"50d8fdedadd222278ba9090f"},"date":{"$date":"2012-12-25T01:14:21.312Z"},"url":"http;//iiiiiiiiiii"}]

However, it currently cannot convert a console-copied document down to this format. @angelozerr has some ideas on how he wants to solve this, and I am certain this is a feature that will be added soon in one form or another. With that in place it will look just about ready for everyday use. I even tested it with around 1,000 documents on the count_tags.js example and it worked; it almost killed Firefox for a while, but it worked and did its job eventually.

I really do recommend taking a look at mongo-mapreduce-webbrowser now and trying it for yourself. It is definitely a tool that no MongoDB (or map reduce) user should go without.
