utils.py

class ucas_dm.utils.Evaluator(data_set)[source]

Bases: object

This class provides methods to evaluate the performance of a recommendation algorithm. Two measures are supported for now: recall and precision.
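
For reference, a minimal per-user sketch of the standard top-k definitions of the two measures (an illustration only, not necessarily the Evaluator's exact implementation):

    def recall_precision_at_k(recommended, relevant, k):
        """Recall@k and precision@k for one user.

        recommended -- ranked list of recommended item ids
        relevant    -- set of item ids the user actually viewed (test set)
        """
        hits = len(set(recommended[:k]) & set(relevant))
        recall = hits / len(relevant) if relevant else 0.0
        precision = hits / k if k else 0.0
        return recall, precision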

__init__(data_set)[source]
Parameters: data_set – User view log.
static _log_to_file(algo_dict, **kwargs)[source]

This method saves the algorithm’s dict data and its performance data to ./performance.log in JSON format.

Parameters:
  • algo_dict – the algorithm’s data in dict format.
  • kwargs – Performance data of the algorithm.
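
This is a private helper, presumably invoked internally when auto_log is enabled; a hedged sketch of the call pattern (the dict keys and metric names below are illustrative, not prescribed by the API):

    # Illustrative values only: the keys in algo_dict and the metric kwargs
    # depend on the algorithm and on what was measured.
    Evaluator._log_to_file({'name': 'ItemCF', 'similarity': 'cosine'},
                           k=10, recall=0.12, precision=0.05)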
evaluate(algo=<ucas_dm.prediction_algorithms.base_algo.BaseAlgo object>, k=[], n_jobs=1, split_date='2000-1-1', debug=False, verbose=False, auto_log=False)[source]
Parameters:
  • algo – the recommendation algorithm to evaluate.
  • k – a list of integers, each giving a number of items to recommend.
  • n_jobs – the maximum number of evaluation jobs to run in parallel; multithreading is used to speed up evaluation.
  • split_date – the date on which the log data is split into training and test sets.
  • debug – if True, the evaluator runs the test on only 5000 instances from the data set.
  • verbose – whether to print the total time the evaluation takes.
  • auto_log – if True, the Evaluator automatically saves performance data to ./performance.log.
Returns:

Average recall and precision.
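
A minimal usage sketch, assuming the view log loads as a pandas DataFrame (the file name and split date are illustrative, and the exact expected log format is not documented here):

    import pandas as pd
    from ucas_dm.utils import Evaluator
    from ucas_dm.prediction_algorithms.base_algo import BaseAlgo

    logs = pd.read_csv('view_logs.csv')  # hypothetical user view log file
    evaluator = Evaluator(logs)

    # BaseAlgo() is a stand-in; pass a concrete algorithm subclass in practice.
    recall, precision = evaluator.evaluate(
        algo=BaseAlgo(), k=[5, 10, 20], n_jobs=4,
        split_date='2018-3-1', verbose=True, auto_log=True)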