
Configuring Quartz Scheduler to Run in Clustered Environment

The goal of running a Quartz job in a clustered environment is NOT to have duplicate running jobs: the triggered job should run just once, regardless of the number of nodes in the cluster.

  1. Download Quartz and extract the file.
  2. Navigate to quartz-x.x.x -> docs -> dbTables and run the SQL script that matches your database to create the Quartz tables.

    quartz-x.x.x
    |- docs
       |- dbTables
          |- tables_<database>.sql <- Pick one that matches your database
       |- images
    |- examples
    |- javadoc
    |- lib
    |- licenses
    |- src
    

  3. Add the Quartz dependency in pom.xml:-

    <dependency>
        <groupId>org.quartz-scheduler</groupId>
        <artifactId>quartz</artifactId>
        <version>2.2.0</version>
    </dependency>
    

  4. Create a quartz.properties file under src/main/resources with the following configuration:-

    #============================================================================
    # Configure Main Scheduler Properties
    #============================================================================
    
    org.quartz.scheduler.instanceId = AUTO
    org.quartz.scheduler.makeSchedulerThreadDaemon = true
    
    #============================================================================
    # Configure ThreadPool
    #============================================================================
    
    org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
    org.quartz.threadPool.threadCount = 1
    org.quartz.threadPool.makeThreadsDaemons = true
    
    #============================================================================
    # Configure JobStore
    #============================================================================
    
    org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.MSSQLDelegate
    org.quartz.jobStore.isClustered = true
    

    Remember to change the org.quartz.jobStore.driverDelegateClass value to match your database type. In my case, I’m using MS SQL Server.
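
    For reference, Quartz ships with delegates for other databases as well, for example:-

    # PostgreSQL
    org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate

    # most other ANSI-compliant databases
    org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate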

  5. I want to be able to autowire Spring beans in my Quartz job. Since Quartz instantiates job classes itself, @Autowired fields are not wired up by default. So, I created a custom job factory called AutowiringSpringBeanJobFactory:-

    public final class AutowiringSpringBeanJobFactory extends SpringBeanJobFactory 
        implements ApplicationContextAware {
    
        private transient AutowireCapableBeanFactory beanFactory;
    
        @Override
        public void setApplicationContext(final ApplicationContext context) {
            beanFactory = context.getAutowireCapableBeanFactory();
        }
    
        @Override
        protected Object createJobInstance(final TriggerFiredBundle bundle) 
            throws Exception {
            final Object job = super.createJobInstance(bundle);
            beanFactory.autowireBean(job);
            return job;
        }
    }
    

  6. In my Quartz job, I can reuse my existing Spring bean by autowiring it:-

    @Service
    @DisallowConcurrentExecution
    public class MyJob implements Job {
        @Autowired
        private MyService myService;
    
        @Override
        public void execute(JobExecutionContext jobExecutionContext) 
            throws JobExecutionException {
            System.out.println("Message: " + myService.getHelloWorld());
        }
    }
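
    For completeness, MyService is just an ordinary Spring bean that isn’t shown in this setup. Here’s a minimal hypothetical sketch of it:-

    @Service
    public class MyService {
        // hypothetical implementation; any Spring-managed bean works here
        public String getHelloWorld() {
            return "Hello World";
        }
    }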
    

  7. In this example, I’m creating a Java-based Spring configuration called QuartzConfig:-

    @Configuration
    public class QuartzConfig {
    
        // this data source points to the database that contains Quartz tables
        @Autowired
        private DataSource dataSource;
    
        @Autowired
        private PlatformTransactionManager transactionManager;
    
        @Autowired
        private ApplicationContext applicationContext;
    
        @Bean
        public SchedulerFactoryBean quartzScheduler() {
            SchedulerFactoryBean quartzScheduler = new SchedulerFactoryBean();
    
            quartzScheduler.setQuartzProperties(quartzProperties());
            quartzScheduler.setDataSource(dataSource);
            quartzScheduler.setTransactionManager(transactionManager);
            quartzScheduler.setOverwriteExistingJobs(true);
    
            // custom job factory with Spring DI support for @Autowired
            AutowiringSpringBeanJobFactory jobFactory = new AutowiringSpringBeanJobFactory();
            jobFactory.setApplicationContext(applicationContext);
            quartzScheduler.setJobFactory(jobFactory);
    
            Trigger[] triggers = {
                    processMyJobTrigger().getObject()
            };
    
            quartzScheduler.setTriggers(triggers);
    
            return quartzScheduler;
        }
    
        @Bean
        public JobDetailFactoryBean processMyJob() {
            JobDetailFactoryBean jobDetailFactory = new JobDetailFactoryBean();
            jobDetailFactory.setJobClass(MyJob.class);
            jobDetailFactory.setDurability(true);
            return jobDetailFactory;
        }
    
        // configure the cron trigger to fire every minute
        @Bean
        public CronTriggerFactoryBean processMyJobTrigger() {
            CronTriggerFactoryBean cronTriggerFactoryBean = new CronTriggerFactoryBean();
            cronTriggerFactoryBean.setJobDetail(processMyJob().getObject());
            cronTriggerFactoryBean.setCronExpression("0 0/1 * * * ?");
            return cronTriggerFactoryBean;
        }
    
        @Bean
        public Properties quartzProperties() {
            PropertiesFactoryBean propertiesFactoryBean = new PropertiesFactoryBean();
            propertiesFactoryBean.setLocation(new ClassPathResource("quartz.properties"));
            Properties properties;
    
            try {
                propertiesFactoryBean.afterPropertiesSet();
                properties = propertiesFactoryBean.getObject();
            }
            catch (IOException e) {
                throw new RuntimeException("Unable to load quartz.properties", e);
            }
    
            return properties;
        }
    }
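
    To see the scheduler in action outside a web container, bootstrap the configuration and keep the JVM alive. This is just a minimal sketch; it assumes a hypothetical DataSourceConfig class that provides the autowired DataSource and PlatformTransactionManager beans:-

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;

    public class Main {
        public static void main(String[] args) throws InterruptedException {
            // SchedulerFactoryBean starts the Quartz scheduler automatically
            // when the application context is refreshed
            AnnotationConfigApplicationContext context =
                    new AnnotationConfigApplicationContext(DataSourceConfig.class, QuartzConfig.class);
            context.registerShutdownHook();

            // the scheduler threads are daemons (see quartz.properties), so
            // block the main thread to keep the JVM running while triggers fire
            Thread.currentThread().join();
        }
    }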
    


Configuring Remote UPS Shutdown on Mac OS X

I have an APC Smart-UPS SMT1500. It has a SmartSlot that accepts a network management card, which allows the UPS to be connected to a switch (or router) instead of plugging a USB cable from the UPS into a computer. To that end, I also have the APC AP9630 UPS Network Management Card 2.

My goal is to configure my UPS to shut down all my Macs (MacBook Pros and a Mac Mini) during a power failure.

This is how I do it…

  1. After installing the network management card in the UPS, plug a network cable from the card to the switch (or router). An IP address will be automatically assigned to the card. Let’s just say the assigned IP address is 192.168.0.100.
  2. Plug the Mac’s power cable into the UPS.
  3. Download Apcupsd and install it on your Mac.
  4. At the end of the installation, the /etc/apcupsd/apcupsd.conf file will open on your screen. Do not click on the “restart” button yet.
  5. In that file, change the following lines:-

    UPSCABLE smart
    UPSTYPE snmp
    DEVICE 192.168.0.100:161:APC:public 
    BATTERYLEVEL 95
    

    The DEVICE value is IP:PORT:VENDOR:COMMUNITY; all you need to change is the IP address and leave the rest as-is.

    Based on the documentation, BATTERYLEVEL, MINUTES, and TIMEOUT work in conjunction: whichever condition occurs first initiates a shutdown. Since I want my Mac to shut down right away during a power failure, I set my BATTERYLEVEL to 95%. When the UPS’s battery level drops below that mark, the shutdown process begins.
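
    For example, a hypothetical configuration combining all three conditions: shut down when the charge drops to 95% or below, or when the estimated runtime left drops to 10 minutes or below, whichever comes first (a TIMEOUT of 0 disables the seconds-on-battery timer):-

    BATTERYLEVEL 95
    MINUTES 10
    TIMEOUT 0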

  6. Save this file and reboot your machine.
  7. After rebooting the machine, to ensure the Apcupsd daemon is getting signals from the UPS, run this command:-

    sudo apcaccess
    

    You should see your Apcupsd configuration and the UPS information, like this:-

    APC      : 001,050,1249
    DATE     : 2013-09-24 20:09:37 -0500  
    HOSTNAME : goliath
    VERSION  : 3.14.10 (13 September 2011) darwin
    UPSNAME  : APCUPS
    CABLE    : Ethernet Link
    DRIVER   : SNMP UPS Driver
    UPSMODE  : Stand Alone
    STARTTIME: 2013-09-24 18:29:28 -0500  
    MODEL    : Smart-UPS 1500
    STATUS   : TRIM ONLINE 
    LINEV    : 125.0 Volts
    LOADPCT  :   0.0 Percent Load Capacity
    BCHARGE  : 100.0 Percent
    TIMELEFT : 335.0 Minutes
    MBATTCHG : 95 Percent
    MINTIMEL : 150 Minutes
    MAXTIME  : 0 Seconds
    MAXLINEV : 126.0 Volts
    MINLINEV : 125.0 Volts
    OUTPUTV  : 110.0 Volts
    SENSE    : High
    DWAKE    : 000 Seconds
    DSHUTD   : 000 Seconds
    DLOWBATT : 10 Minutes
    LOTRANS  : 106.0 Volts
    HITRANS  : 127.0 Volts
    RETPCT   : -1073742588.0 Percent
    ITEMP    : 28.0 C Internal
    ALARMDEL : No alarm
    BATTV    : 27.0 Volts
    LINEFREQ : 60.0 Hz
    LASTXFER : Line voltage notch or spike
    NUMXFERS : 0
    TONBATT  : 0 seconds
    CUMONBATT: 0 seconds
    XOFFBATT : N/A
    SELFTEST : OK
    STESTI   : OFF
    STATFLAG : 0x0700000A Status Flag
    MANDATE  : 06/17/2013
    SERIALNO : AS1325121222
    BATTDATE : 06/17/2013
    NOMOUTV  : 120 Volts
    NOMBATTV : 3221224708.0 Volts
    HUMIDITY : 3221224708.0 Percent
    AMBTEMP  : 3221224708.0 C
    EXTBATTS : -1073742588
    BADBATTS : -1073742588
    FIRMWARE : UPS 08.3 / MCU 14.0
    END APC  : 2013-09-24 20:10:00 -0500
    

  8. Finally, pull the UPS plug from the wall to simulate a power failure. Once the battery level drops below your threshold, your machine will begin to shut down within a minute or so.

Useful Information

If you tweak the Apcupsd configuration in the /etc/apcupsd/apcupsd.conf file, you will need to restart the Apcupsd daemon:-

sudo /Library/StartupItems/apcupsd/apcupsd restart

Managing the Order of AJAX Calls on Input Field’s Keyup Event

SCENARIO

Consider the following code:-

$('#employeeSearchField').keyup(function() {
	var query = $(this).val();

	$.get('/employees/api/search', { q: query }, function(data) {
		// do stuff
		...
	});

}).trigger('keyup');

When a user types an employee’s name, “Mike”, in the search field, a web service call is fired for each character typed. In this example, the following web service calls are made:-

  • GET /employees/api/search?q=M
  • GET /employees/api/search?q=Mi
  • GET /employees/api/search?q=Mik
  • GET /employees/api/search?q=Mike

Let’s assume this web service searches the input string against databases (or flat files, Facebook API, etc) and returns a list of employee JSON objects where their names match the given input string. The code above will take the result and display the employee list on the view.

PROBLEM

Since we can’t control how long each web service call takes to process the request, the order of the returned JSON objects might not match the order of the web service calls. As a result, we may present stale information on the view. Further, we may get the annoying “flicker” problem where the old employee list overwrites the new employee list on the view.

SOLUTION

To ensure the order of the returned JSON objects matches the order of the web service calls, we need to keep track of each AJAX call’s timestamp. In this example, I’m using Moment.js, a date library, but you can also use the built-in Date object. For now, think of Moment.js as a Swiss Army Knife for dates, or a Rambo Knife for dates, or a MacGyver Knife for dates… okay, maybe not a MacGyver Knife since MacGyver can use a toothpick to solve the date problem.

// keep track of the latest AJAX call's timestamp
var latestAjaxCallDateTime;

$('#employeeSearchField').keyup(function() {
	var query = $(this).val();

	// before executing the AJAX call, store the latest timestamp
	latestAjaxCallDateTime = moment();
	
	// set the "soon-to-be-executed" AJAX call's timestamp to be the same 
	// as the latest timestamp
	var currentAjaxCallDateTime = latestAjaxCallDateTime;

	$.get('/employees/api/search', { q: query }, function(data) {

		// if this call's timestamp is older than the latest timestamp, the
		// response is stale, so discard it
		if (currentAjaxCallDateTime.isBefore(latestAjaxCallDateTime)) {
			return;
		}

		// do stuff
		...
	});

}).trigger('keyup');

Yes, this looks pretty hacky, but it works. The whole idea is that we do not process a result if it is stale. The key to making this work is declaring latestAjaxCallDateTime outside the handler so it is shared across all keyup events, while currentAjaxCallDateTime is declared WITHIN the keyup handler so each AJAX callback closes over its own timestamp.
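
As a side note, since jQuery’s $.get() returns a jqXHR object, another common approach is to keep a reference to the in-flight request and call its abort() method before firing a new one, which cancels the stale request instead of ignoring its response.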

Pretty Print JSON in JavaScript

PROBLEM

You want to display a JSON object in a human-readable format in JavaScript.

TAKE 1

console.log(json);

While this works, I find this approach inconvenient when viewing the output in Firebug because I have to click on each generated link to view the details.

TAKE 2

console.log(JSON.stringify(json));

… will generate this:-

[{"title":"Holiday","id":"a1","start":"2014-02-03T09:00:00.000Z","allDay":true},{"title":"Pay Day","id":"a2","start":"2014-03-31T08:00:00.000Z","allDay":true}]

This approach displays the entire JSON object as one long string. This is better than clicking on each generated link, but it is still fairly unreadable and cumbersome for a large JSON object.

TAKE 3

console.log(JSON.stringify(json, null, '\t'));

… will generate this:-

[
	{
		"title": "Holiday",
		"id": "a1",
		"start": "2014-02-03T09:00:00.000Z",
		"allDay": true
	},
	{
		"title": "Pay Day",
		"id": "a2",
		"start": "2014-03-31T08:00:00.000Z",
		"allDay": true
	}
]

Aw snap… a nicely formatted JSON string.
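
As a side note, the third argument also accepts a number: JSON.stringify(json, null, 2) indents each level with 2 spaces instead of a tab character.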

Reading Directory/File’s ACL Directly from Java

Prior to Java 7, there was no way to read a directory/file’s ACL directly from Java. With Java 7, you can write something like this:-

// this can be a directory or a file
String pathName = "C:\\Users\\thundercat\\Desktop";

Path path = Paths.get(pathName);

try {
    FileOwnerAttributeView fileAttributeView = Files.getFileAttributeView(path, FileOwnerAttributeView.class);

    System.out.println("Owner:\n\t" + fileAttributeView.getOwner());

    AclFileAttributeView aclFileAttributeView = Files.getFileAttributeView(path, AclFileAttributeView.class);

    if (aclFileAttributeView != null) {
        System.out.println("ACL: ");
        for (AclEntry aclEntry : aclFileAttributeView.getAcl()) {
            System.out.println("\t" + aclEntry.principal());
            System.out.println("\t\t" + aclEntry.permissions());
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}

When you execute the code above (this example is on Windows; on a file system without ACL support, getFileAttributeView() returns null for AclFileAttributeView, hence the null check), you will get something like this:-

Owner:
	BUILTIN\Administrators (Alias)
ACL: 
	NT AUTHORITY\SYSTEM (Well-known group)
		[APPEND_DATA, WRITE_ATTRIBUTES, DELETE, SYNCHRONIZE, READ_DATA, WRITE_ACL, WRITE_DATA, READ_ATTRIBUTES, WRITE_NAMED_ATTRS, READ_ACL, DELETE_CHILD, WRITE_OWNER, EXECUTE, READ_NAMED_ATTRS]
	BUILTIN\Administrators (Alias)
		[APPEND_DATA, WRITE_ATTRIBUTES, DELETE, SYNCHRONIZE, READ_DATA, WRITE_ACL, WRITE_DATA, READ_ATTRIBUTES, WRITE_NAMED_ATTRS, READ_ACL, DELETE_CHILD, WRITE_OWNER, EXECUTE, READ_NAMED_ATTRS]
	MYDOMAIN\thundercat (User)
		[APPEND_DATA, WRITE_ATTRIBUTES, DELETE, SYNCHRONIZE, READ_DATA, WRITE_ACL, WRITE_DATA, READ_ATTRIBUTES, WRITE_NAMED_ATTRS, READ_ACL, DELETE_CHILD, WRITE_OWNER, EXECUTE, READ_NAMED_ATTRS]

Managing Log4j Configuration for Both Development and Production Environments

PROBLEM

Most of the time, we set the Log4j’s log levels to something lower (debug or info) during our local development. Once it is ready for production, we normally set the Log4j’s log levels to something higher (warn or even error) to prevent meaningless information from flooding the server log.

One way to do this is to manually adjust the log level(s) in log4j.xml during our local development, for example:-

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

<log4j:configuration>
    <!-- This appender logs to the console -->
    <appender name="consoleAppender" class="org.apache.log4j.ConsoleAppender">
        <param name="Target" value="System.out"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="[%-5p] [%c{1}] [%M:%L] - %m%n"/>
        </layout>
    </appender>

	<!-- Set "debug" log level for project code  -->
    <logger name="com.choonchernlim.myproject">
        <level value="debug"/>
    </logger>

	<!-- Set "info" log level for Spring framework  -->
    <logger name="org.springframework">
        <level value="info"/>
    </logger>

	<!-- Set "debug" log level for Hibernate framework  -->
    <logger name="org.hibernate">
        <level value="debug"/>
    </logger>

    <root>
    	<!-- The default log level is "warn" -->
        <priority value="warn"/>
        <appender-ref ref="consoleAppender"/>
    </root>
</log4j:configuration>

Once we are ready for production, we will manually change the package-specific log levels back to warn or comment them out, for example:-

<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">

<log4j:configuration>
    <!-- this appender logs to the console -->
    <appender name="consoleAppender" class="org.apache.log4j.ConsoleAppender">
        <param name="Target" value="System.out"/>
        <layout class="org.apache.log4j.PatternLayout">
            <param name="ConversionPattern" value="[%-5p] [%c{1}] [%M:%L] - %m%n"/>
        </layout>
    </appender>

	<!-- Comment out package specific log levels
    <logger name="com.choonchernlim.myproject">
        <level value="debug"/>
    </logger>

    <logger name="org.springframework">
        <level value="info"/>
    </logger>

    <logger name="org.hibernate">
        <level value="debug"/>
    </logger>
	-->
	
    <root>
		<!-- The default log level is "warn" -->
        <priority value="warn"/>
        <appender-ref ref="consoleAppender"/>
    </root>
</log4j:configuration>

The problem with this approach is that I always forget to make the necessary changes in log4j.xml when I’m ready to package my project for production deployment.

SOLUTION

The solution I came up with is rather simple. We will have two Log4j XML files under src/main/resources:-

  • log4j.xml
  • log4j-dev.xml

By default, assuming there’s no further Log4j configuration, log4j.xml will always get picked up by Log4j because of its default filename and location, while log4j-dev.xml will always be ignored.

To ensure our local development picks up the configuration from log4j-dev.xml, we will need to make a minor tweak to the Jetty configuration in pom.xml:-

<project ...>
	...
    <build>
        <plugins>
            <plugin>
                <groupId>org.mortbay.jetty</groupId>
                <artifactId>jetty-maven-plugin</artifactId>
                <version>8.1.8.v20121106</version>
                <configuration>
                    <systemProperties>
                        <!--
                        When Jetty runs, "log4j-dev.xml" will be used instead of 
						"log4j.xml" because the latter is reserved for production 
						usage.
                        -->
                        <systemProperty>
                            <name>log4j.configuration</name>
                            <value>log4j-dev.xml</value>
                        </systemProperty>
                    </systemProperties>
					...
                </configuration>
                <dependencies>
					...
                </dependencies>
            </plugin>
        </plugins>
    </build>
</project>

This way, when we run our local development on Jetty, log4j-dev.xml will be used. When we deploy the project in production, log4j.xml will be used instead.
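
For reference, nothing changes in the project code itself; each class obtains its logger as usual, and whichever XML file is active decides what actually gets logged. Here’s a minimal sketch with a hypothetical class under com.choonchernlim.myproject:-

package com.choonchernlim.myproject;

import org.apache.log4j.Logger;

public class ReportGenerator {
    private static final Logger LOGGER = Logger.getLogger(ReportGenerator.class);

    public void generate() {
        // printed during local development ("debug" level in log4j-dev.xml),
        // suppressed in production (root "warn" level in log4j.xml)
        LOGGER.debug("Generating report...");
    }
}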