Quantity of Evaluation:

The controversial nature of the Intervention and the need for expenditure to be accounted for have meant that a large number of evaluations have been undertaken regarding various aspects of the Intervention. Within five years of the establishment of the Intervention, by December 2012, 98 reports, seven parliamentary inquiries and hundreds of submissions had been completed. However, the sheer quantity of these reports itself hinders evaluation, as it becomes difficult to synthesise so many findings into a clear assessment of effectiveness.

Impartiality of Evaluation:

The majority of evaluations of the Intervention have been undertaken by government departments and paid consultants. Australian National University researchers Jon Altman and Susie Russell suggest that the evaluation of the Intervention, instead of being an independent, objective process, has been merged into the policy process and, in many cases, is performed by the policy-makers themselves. This means there is a real risk of evidence being ignored or hidden to suit an agenda.

Independent reports and government-commissioned reports have often contradicted each other, with the government seeking to discredit the independent reports rather than gathering additional data. These include independent reports by researchers at the Jumbunna Indigenous House of Learning at the University of Technology Sydney, Concerned Australians and the Equality Rights Alliance, all of which have often reached conclusions different from those of government reports.

Quality and Consistency of Evaluation:

The 'final evaluation' of the Intervention under the NTNER occurred in November 2011 with the publication of the Northern Territory Emergency Response Evaluation Report. However, the Stronger Futures legislation did not come into effect until August 2012, leaving eight months of the Intervention unaccounted for in the evaluation.

The Closing the Gap in the Northern Territory Monitoring Reports are conducted every six months. A significant criticism is that they focus on bureaucratic 'outputs' rather than outcomes. Income management studies, for example, have reported on 'outputs' such as the number of recipients of the BasicsCard or the total amount of income quarantined, rather than on the card's effectiveness in improving health and child protection outcomes.

Much of the data collected has also relied on self-assessment in the form of surveys, such as asking individuals to rate their own health rather than collecting and analysing data on disease. Another issue is the ad hoc nature of some reports. For example, the review of the Alcohol Management Plan in Tennant Creek was conducted only once. This makes it difficult to draw comparisons over the life of the policy and to evaluate the effectiveness of particular measures.

Independent statistical data can be hard to find, since information compiled by the Australian Bureau of Statistics is national in scope and cannot be translated directly into the context of individual Indigenous communities in the Northern Territory. Indigenous Australians also have a lower median age than other Australians, meaning that raw figures on employment rates or incarceration rates can be skewed unless they are adjusted for age.

Benchmarks for Evaluation:

Altman and Russell have noted that the "absence of an overarching evaluation strategy has resulted in a fragmented and confused approach". They found that the 2007 Intervention did not have any documentation articulating the basis of the policy or how it should be evaluated. The first document to address this was the unpublished Program Logic Options Report, developed in 2010, three years after the Intervention began. This means that there are no original benchmarks for evaluation, and that the decision to extend the program in 2012 was made without clear evidence of its effectiveness. Furthermore, there is only a limited connection between the benchmarks proposed in the 2010 Report and those used in later evaluations.
