Anyone who grew up on PCs will be familiar with the old Shift+Delete trick—that combination of keystrokes that immediately and permanently nukes a file. For many, it’s a surprise to find that OS X doesn’t offer that functionality without cumbersome Terminal commands or third-party apps. Instead, you must first move a file to Trash, and then “empty” the Trash. Only then is a file truly gone—along with everything else in your Trash.

But what if you only want to delete a particular file at a given time? With Automator, you can add the functionality to your context menus (“right-click” menus) using the Services workflow.

You can create an Automator service or application that runs the rm shell command, which permanently deletes files or folders and skips the Trash.

Start by creating a new Service in the Automator app.

Set the service to receive “files or folders” as input; you may also want to limit its availability to the Finder app.

Optional—but highly recommended to prevent accidental deletion—is the addition of an Ask for Confirmation step to the workflow. Drag it into your workflow from the column at left, then customize the message and the Cancel and OK buttons to your liking.

Finally, add the Run Shell Script step to the workflow, again dragging it in from the left. Make sure the input is passed “as arguments”. Then enter the following script:

for f in "$@"
do
    rm -rf "$f"
done

To complete the process with a bit of audio confirmation, add the OS X “emptied trash” sound by including the following command at the end of the shell script:

afplay "/System/Library/Components/CoreAudio.component/Contents/SharedSupport/SystemSounds/finder/empty trash.aif"
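
Putting the two pieces together, the complete Run Shell Script body might look like this (a minimal sketch; the sound file path is the one quoted above and may differ between OS X versions):

for f in "$@"
do
    rm -rf "$f"
done

# optional audio confirmation
afplay "/System/Library/Components/CoreAudio.component/Contents/SharedSupport/SystemSounds/finder/empty trash.aif"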

Save your service, and it should be ready to use in Finder from the Services menu in the menu bar. You can also configure a keyboard shortcut to your service in the Keyboard preference pane of System Preferences.

Source/Screenshots : http://apple.stackexchange.com/questions/66369/how-can-i-skip-the-trash-when-deleting-a-file

Enabling JSP support in Jetty 7.0.0

Posted: May 6, 2013 in Java

Jetty 7 doesn’t enable JSP support by default, so after trying for some time I came across this blog (which suggested downloading Jetty from Codehaus, since that build comes with the JSP libraries). I started with javax.servlet-api-3.0.1.jar and jetty-all.jar, and was eventually able to resolve all the dependencies and get my website working. I used more than 10 jars to resolve them; most of the time I only needed a few classes from each jar. The exact combination is application specific and will differ based on the libraries your application uses.

All the jars I used to get my website up and running:
1. javax.servlet-api-3.0.1.jar
2. jetty-all-8.1.9.v20130131.jar
3. org.apache.jasper.glassfish_2.1.0.v201007080150.jar
4. commons-logging-1.1.2.jar
5. jsp-2.1-6.0.2.jar
6. jetty-util-6.1.12.jar
7. j2ee.jar
8. jsp-api-6.0.14.jar
9. ant-1.7.1.jar
10. ant-launcher-1.7.1.jar
11. jsp-api-2.1-glassfish-9.1.1.B51.p0.jar
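
On my setup I simply made sure all of these jars ended up on Jetty’s startup classpath. One possible approach (an assumption about a standard Jetty layout, not something from my original setup) is to copy them into a directory your start configuration already scans, for example $JETTY_HOME/lib/ext:

cp javax.servlet-api-3.0.1.jar \
   jetty-all-8.1.9.v20130131.jar \
   org.apache.jasper.glassfish_2.1.0.v201007080150.jar \
   /usr/local/jetty/lib/ext/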
But this is not enough; if you start the server you get an error like this:

INFO::NO JSP Support for /, did not find org.apache.jasper.servlet.JspServlet

So you have to enable JSP support in Jetty by passing additional, undocumented options to the Jetty start script:

$ java -jar jetty.jar OPTIONS=Server,jsp

The best way to start Jetty and customize it is to use the provided script, bin/jetty.sh. But first you have to do these steps:

# change the permissions of the provided script
$ chmod u+x bin/jetty.sh

# create a file where you can put customized options
$ cat /etc/default/jetty
JETTY_PORT=80
JETTY_HOME=/usr/local/jetty
JAVA_OPTIONS="$JAVA_OPTIONS -Dartifactory.home=/mirai/DATA/artifactory"
JETTY_ARGS=OPTIONS=Server,jsp

# create the link in /etc/init.d
$ sudo ln -s /usr/local/jetty/bin/jetty.sh /etc/init.d/jetty

# use this script to start/stop/restart the server
$ /etc/init.d/jetty <start|stop|restart>
Hope it helps. Drop me a note if you have any queries.

At the time of writing, the number of apps in the iPhone App Store looks set to push through the 1 million mark for the first time [it has since crossed 1 million]. This means that developers and publishers increasingly need to turn to App Store SEO as a way of improving visibility and driving downloads. Hundreds of agencies are already beginning to offer SEO for the iPhone App Store as a service, but there’s some simple stuff you can do yourself to improve the visibility of your app.

At present, the iPhone App Store search system is pretty unsophisticated, which means that some of the tricks that stopped working on the web many years ago can still be effective. However, the App Store search algorithm is rapidly evolving, and some methods, such as keyword stuffing, already no longer work so easily. That said, there’s still a lot you can do, so here’s a guide to the top 7 things you can do to improve your ranking.

1. Use keywords in the app title and developer name. For example, a user searching for “dining” sees a number of apps with “dining” in the title in the top search rankings.

2. Use numbers or the letter ‘A’ in the app title – starting your app’s name with a space, a number, or the letter ‘A’ ensures that the app appears near the top of the results in the “sort by name” view under a specific category.

3. Encourage reviews – only about 0.5% of users leave a review, but one element of the App Store search algorithm appears to be the number of reviews. Write your own reviews and encourage friends, family, and users to do the same.

4. Use price drops – a quick way to move up the charts is to reduce the price of your app. For example, one game experimented by cutting the price from $2.99 to 99 cents for two days. Before the price drop, the app had been ranked around 100 in top paid apps; afterwards, it shot up to second place within 24 hours.

5. Make your app icon active – this is a visual trick that can be very powerful as it catches the user’s eye, especially with 3D-looking icons.

6. Make the most of your screenshots – make sure the screenshots you display are as attractive as possible and draw people in to want to use the app.

7. List your app in directories – List your app in third party directories and aggregators. Each listing links to your iPhone app store page from within a relevant category to help drive users, downloads and reviews.

Hope this helps. Good Luck!!

I have been working on my iPhone app for the last couple of months and all was good so far, until one day I started using the awesome Google Analytics library (googleAnalytics.a) and tried to add it to my project repository.

Surprisingly, I was not able to see the .a file in my SVN client even though it was there in my project (local copy) and had never been committed. So I googled around and discovered Subversion’s svn:ignore property.

The svn:ignore property contains a list of file patterns which certain Subversion operations will ignore. It works in conjunction with the global-ignores run-time configuration option to filter unversioned files and directories out of the svn status, svn add, and svn import commands.

In my case, .a files were set as ignored, so I could not see them in my SVN client’s file listing, and the only solution I could think of was to remove *.a from the local ignore list. But that’s tricky.
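
Before touching the ignore list, you can see exactly what Subversion is hiding from the command line; a small sketch (run from the root of your working copy, which is an assumption about your layout):

# show any svn:ignore patterns set on the current directory
svn propget svn:ignore .

# list everything, including files hidden by ignore rules
svn status --no-ignore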

Fortunately, I found another solution: select View -> Ignored Files and you will see all the ignored files in your SVN client. Then simply add and commit the file. Done!!

So easy, right!!!!

Thanks to post from Nick

On Stack Overflow there has been some interest in how to use the Delegate design pattern in Objective-C. Of course, the first step in any search should be to read Apple’s documentation, but many people seem not to want to read the whole thing. Trust me, folks, it is worth it. But Apple’s documentation on creating delegates doesn’t use protocols, which are such an amazing and useful part of the Objective-C language.

That being said, I’d like to give a short demonstration of how to create a class with a delegate. For the purposes of this tutorial, we’ll call our class JSTutorial.
The interface to JSTutorial starts out as the following:

@interface JSTutorial: NSObject {
  NSString *title;
  NSString *body;
}
- (void)generateTutorial;
@property (nonatomic, retain) NSString *title;
@property (nonatomic, retain) NSString *body;
@end

Now, we need to modify this interface to include a delegate protocol:

@protocol JSTutorialDelegate;
@interface JSTutorial: NSObject {
  NSString *title;
  NSString *body;
  id <JSTutorialDelegate> delegate;
}
- (void)generateTutorial;
@property (nonatomic, retain) NSString *title;
@property (nonatomic, retain) NSString *body;
@property (nonatomic, assign) id <JSTutorialDelegate> delegate;
@end
@protocol JSTutorialDelegate <NSObject>
@optional
- (void)tutorialDidFinish:(JSTutorial *)tutorial;
@end

The implementation for JSTutorial might look like this:

@implementation JSTutorial
@synthesize title;
@synthesize body;
@synthesize delegate;
- (void)generateTutorial {
  // do something here?
  // the delegate method is optional, so check before calling it
  if ([[self delegate] respondsToSelector:@selector(tutorialDidFinish:)]) {
    [[self delegate] tutorialDidFinish:self];
  }
}
- (void)dealloc {
  [title release];
  [body release];
  [super dealloc];
}
@end

Finally, the class that implements JSTutorialDelegate might have its interface declared as follows:

@interface SomeClass : SomeControllerClass <JSTutorialDelegate>
// …
@end

SomeClass should implement tutorialDidFinish:, but since the method is declared @optional, JSTutorial checks with respondsToSelector: before calling it.
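
For completeness, here is a minimal sketch of how SomeClass might wire everything up; the startTutorial method name and the property values are assumptions for illustration, not part of the original tutorial:

@implementation SomeClass

- (void)startTutorial {
    JSTutorial *tutorial = [[JSTutorial alloc] init];
    tutorial.title = @"Delegates";
    tutorial.body = @"How to use the delegate pattern in Objective-C.";
    tutorial.delegate = self;   // SomeClass will receive the callback
    [tutorial generateTutorial];
    [tutorial release];         // manual retain/release, matching the rest of this post
}

// optional JSTutorialDelegate method
- (void)tutorialDidFinish:(JSTutorial *)tutorial {
    NSLog(@"Tutorial \"%@\" finished", tutorial.title);
}

@end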

I hope that this has helped those who were struggling with the delegate design pattern.

Detecting a shake using iPhone SDK

Posted: March 22, 2011 in iPhone

The accelerometer has added an all-new dimension to the iPhone, and there is no limit to how we can use the accelerometer API in the iPhone SDK. The following are some of the well-known simple uses of shake/motion detection:

  1. Refresh the current view
  2. Go to next/previous screen
  3. Start editing
  4. Shuffle
  5. and the list goes on

Today let’s check out how we can detect a simple shake using the API.

In order to detect a shake, the class needs to conform to the UIAccelerometerDelegate protocol and implement the optional method - (void)accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration.

- (void) accelerometer: (UIAccelerometer *)accelerometer didAccelerate: (UIAcceleration *)acceleration {
    if (self.lastAcceleration) {
        if (!shakeDetected && IsDeviceShaking(self.lastAcceleration, acceleration, 0.7)) {
            shakeDetected = YES;
            //SHAKE DETECTED. WRITE YOUR CODE HERE.
        } else if (shakeDetected && !IsDeviceShaking(self.lastAcceleration, acceleration, 0.2)) {
            shakeDetected = NO;
        }
    }
    self.lastAcceleration = acceleration; 
}

Note that in the class which implements this method we declare a UIAcceleration property named lastAcceleration and a BOOL variable called shakeDetected.
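
For reference, the corresponding declarations might look like this (a minimal sketch; the class name ShakeViewController is just an assumption):

@interface ShakeViewController : UIViewController <UIAccelerometerDelegate> {
    BOOL shakeDetected;
}

// the previous accelerometer sample, compared against the latest one
@property (nonatomic, retain) UIAcceleration *lastAcceleration;

@end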

And don’t forget to add these two lines to your view’s viewDidLoad:

[[UIAccelerometer sharedAccelerometer] setUpdateInterval:(1.0 / 40)];
[[UIAccelerometer sharedAccelerometer] setDelegate:self];

We also need to write a static C function named IsDeviceShaking. Here’s its implementation:

static BOOL IsDeviceShaking(UIAcceleration* last, UIAcceleration* current, double threshold) {
    double deltaX = fabs(last.x - current.x);
    double deltaY = fabs(last.y - current.y);
    double deltaZ = fabs(last.z - current.z);
    return (deltaX > threshold && deltaY > threshold) ||
           (deltaX > threshold && deltaZ > threshold) ||
           (deltaY > threshold && deltaZ > threshold);
}

The above function returns TRUE if the device is shaken along at least two of the three axes and the movement on each exceeds the threshold. You can increase or decrease the sensitivity by changing the threshold.

Just put these pieces together and you can detect shake in any screen of your application you want.

Let’s look at the components that make up Audio Session Services: hardware queries and route changes.

A common hardware query is to find out whether an input device (e.g., a microphone) is available. Since the first-generation iPod touch lacks a microphone, and the second-generation iPod touch requires an external microphone attachment, it is possible that a microphone will not be available.

The query looks like this:

UInt32 input_available = 0;
UInt32 the_size = sizeof(input_available);
AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &the_size,
                        &input_available);
The value will be returned to the variable input_available, which will be set to 1 if input is available or 0 if no input is available. AudioSessionGetProperty() takes a key constant as the first parameter, the size of the return variable, and a pointer to the return value. This is a common pattern in Core Audio. Core Audio likes to use generic functions for *GetProperty() and *SetProperty() so they can be reused for different things. Since the API is C-based, this is achieved by passing pointers with size information and a key constant.
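
The query above assumes the audio session has already been initialized somewhere in your app; if it has not, a minimal sketch of the one-time setup looks like this (MyInterruptionListener is just a placeholder name):

static void MyInterruptionListener(void *user_data, UInt32 interruption_state)
{
    // handle kAudioSessionBeginInterruption / kAudioSessionEndInterruption here
}

// one-time setup, typically early in the application launch sequence;
// NULL for the run loop arguments means the defaults are used
AudioSessionInitialize(NULL, NULL, MyInterruptionListener, NULL);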
Audio Session Services provides a property listener callback API, which you can set to monitor property changes of specific supported types. As an example, we will set up a simple property listener that triggers if an input device comes or goes.

AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable,
                                MyPropertyListener, self);

MyPropertyListener is defined as follows:

void MyPropertyListener(void* user_data, AudioSessionPropertyID property_id,
                        UInt32 data_size, const void* property_data)
{
    if (kAudioSessionProperty_AudioInputAvailable == property_id)
    {
        if (sizeof(UInt32) == data_size)
        {
            UInt32 input_is_available = *(UInt32*) property_data;
            if (input_is_available)
            {
                printf("Input is available\n");
            }
            else
            {
                printf("Input is not available\n");
            }
        }
    }
}

Notice that the setup for a property listener callback is very similar to that of all the other callbacks you have seen.
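
Route changes, the other component mentioned at the start of this section, can be monitored with exactly the same pattern. A minimal sketch (the callback body here is just a placeholder):

static void MyRouteChangeListener(void *user_data, AudioSessionPropertyID property_id,
                                  UInt32 data_size, const void *property_data)
{
    if (kAudioSessionProperty_AudioRouteChange == property_id)
    {
        // property_data is a CFDictionaryRef describing why the route changed,
        // e.g. headphones being plugged in or unplugged
        printf("Audio route changed\n");
    }
}

// register the listener, just like the input-availability example above
AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
                                MyRouteChangeListener, NULL);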